ℓ0-norm Minimization for Basis Selection
David Wipf and Bhaskar Rao*
Department of Electrical and Computer Engineering
University of California, San Diego, CA 92092
[email protected], [email protected]
Abstract
Finding the sparsest, or minimum ℓ0-norm, representation of a signal given an overcomplete dictionary of basis vectors is an important problem in many application domains. Unfortunately, the required optimization problem is often intractable because there is a combinatorial increase in the number of local minima as the number of candidate basis vectors increases. This deficiency has prompted most researchers to instead minimize surrogate measures, such as the ℓ1-norm, that lead to more tractable computational methods. The downside of this procedure is that we have now introduced a mismatch between our ultimate goal and our objective function. In this paper, we demonstrate a sparse Bayesian learning-based method of minimizing the ℓ0-norm while reducing the number of troublesome local minima. Moreover, we derive necessary conditions for local minima to occur via this approach and empirically demonstrate that there are typically many fewer for general problems of interest.
1 Introduction
Sparse signal representations from overcomplete dictionaries find increasing relevance in many application domains [1, 2]. The canonical form of this problem is given by

$$\min_w \|w\|_0, \quad \text{s.t. } t = \Phi w, \qquad (1)$$

where Φ ∈ ℝ^{N×M} is a matrix whose columns represent an overcomplete basis (i.e., rank(Φ) = N and M > N), w is the vector of weights to be learned, and t is the signal vector. The cost function being minimized is the ℓ0-norm of w (i.e., a count of the nonzero elements in w). In this vein, we seek to find weight vectors whose entries are predominantly zero but that nonetheless allow us to accurately represent t.
While our objective function is not differentiable, several algorithms have nonetheless been derived that (i) converge almost surely to a solution that locally minimizes (1) and, more importantly, (ii) when initialized sufficiently close, converge to a maximally sparse solution that also globally optimizes an alternate objective function. For convenience, we will refer to these approaches as local sparsity maximization (LSM) algorithms. For example, procedures that minimize ℓp-norm-like diversity measures¹ have been developed such that, if p is chosen sufficiently small, we obtain an LSM algorithm [2, 3]. Likewise, a Gaussian entropy-based LSM algorithm called FOCUSS has been developed and successfully employed to solve neuromagnetic imaging problems [4]. A similar algorithm was later discovered in [5] from the novel perspective of a Jeffreys noninformative prior. While all of these methods are potentially very useful candidates for solving (1), they suffer from one significant drawback: as we have discussed in [6], every local minimum of (1) is also a local minimum of the LSM algorithms.

* This work was supported by an ARCS Foundation scholarship, DiMI grant 22-8376 and Nissan.
¹ Minimizing a diversity measure is often equivalent to maximizing sparsity.
Unfortunately, there are many local minima to (1). In fact, every basic feasible solution w* to t = Φw is such a local minimum.² To see this, we note that the value of ‖w*‖₀ at such a solution is less than or equal to N. Any other feasible solution can be written as w* + αw′, where w′ ∈ Null(Φ). For simplicity, if we assume that every subset of N columns of Φ is linearly independent (the unique representation property, URP), then w′ must necessarily have nonzero elements in locations that differ from those of w*. Consequently, any solution in the neighborhood of w* will satisfy ‖w*‖₀ < ‖w* + αw′‖₀. This ensures that all such w* represent local minima to (1).
The number of basic feasible solutions is bounded between $\binom{M-1}{N-1} + 1$ and $\binom{M}{N}$; the exact number depends on t and Φ [4]. Regardless, when M ≫ N, we have a large number of local minima and, not surprisingly, we often converge to one of them using currently available LSM algorithms. One potential remedy is to employ a convex surrogate measure in place of the ℓ0-norm that leads to a more tractable optimization problem. The most common choice is to use the alternate norm ‖w‖₁, which creates a unimodal optimization problem that can be solved via linear programming or interior point methods. The considerable price we must pay, however, is that the global minimum of this objective function need not coincide with the sparsest solutions to (1).³ As such, we may fail to recover the maximally sparse solution regardless of the initialization we use (unlike an LSM procedure).
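To make the ℓ1 surrogate concrete, the following is a minimal sketch (our own, not from the paper) of basis pursuit solved as a linear program via the standard reformulation w = u − v with u, v ≥ 0; the dictionary, signal, and sparsity pattern are illustrative placeholders.

```python
# Basis pursuit: min ||w||_1 s.t. Phi w = t, as an LP with w = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, t):
    N, M = Phi.shape
    c = np.ones(2 * M)                    # objective 1^T u + 1^T v = ||w||_1
    A_eq = np.hstack([Phi, -Phi])         # Phi (u - v) = t
    res = linprog(c, A_eq=A_eq, b_eq=t, bounds=[(0, None)] * (2 * M))
    return res.x[:M] - res.x[M:]

rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 30))
w0 = np.zeros(30)
w0[[3, 17, 25]] = [1.0, -0.5, 2.0]        # sparse ground truth (illustrative)
w_hat = basis_pursuit(Phi, Phi @ w0)
print("recovered nonzeros:", np.flatnonzero(np.abs(w_hat) > 1e-6))
```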
In this paper, we will demonstrate an alternative algorithm for solving (1) using a sparse Bayesian learning (SBL) framework. Our objective is twofold. First, we will prove that, unlike minimum ℓ1-norm methods, the global minimum of the SBL cost function is only achieved at the minimum ℓ0-norm solution to t = Φw. Later, we will show that this method is only locally minimized at a subset of basic feasible solutions and therefore has fewer local minima than current LSM algorithms.
2 Sparse Bayesian Learning
Sparse Bayesian learning was initially developed as a means of performing robust regression using a hierarchical prior that, empirically, has been observed to encourage sparsity [8]. The most basic formulation proceeds as follows. We begin with an assumed likelihood model of our signal t given fixed weights w,

$$p(t \mid w) = (2\pi\sigma^2)^{-N/2} \exp\left(-\frac{1}{2\sigma^2}\,\|t - \Phi w\|^2\right). \qquad (2)$$
To provide a regularizing mechanism, we assume the parameterized weight prior

$$p(w; \gamma) = \prod_{i=1}^{M} (2\pi\gamma_i)^{-1/2} \exp\left(-\frac{w_i^2}{2\gamma_i}\right), \qquad (3)$$
where γ = [γ₁, …, γ_M]ᵀ is a vector of M hyperparameters controlling the prior variance of each weight. These hyperparameters (along with the error variance σ² if necessary) can be estimated from the data by marginalizing over the weights and then performing ML optimization. The marginalized pdf is given by

$$p(t; \gamma) = \int p(t \mid w)\, p(w; \gamma)\, dw = (2\pi)^{-N/2}\, |\Sigma_t|^{-1/2} \exp\left(-\frac{1}{2}\, t^T \Sigma_t^{-1} t\right), \qquad (4)$$
² A basic feasible solution is a solution with at most N nonzero entries.
³ In very restrictive settings, it has been shown that the minimum ℓ1-norm solution can equal the minimum ℓ0-norm solution [7]. But in practical situations, this result often does not apply.
where Σₜ ≜ σ²I + ΦΓΦᵀ and we have introduced the notation Γ ≜ diag(γ).⁴ This procedure is referred to as evidence maximization or type-II maximum likelihood [8]. Equivalently, and more conveniently, we may instead minimize the cost function

$$L(\gamma; \sigma^2) = -\log p(t; \gamma) \propto \log|\Sigma_t| + t^T \Sigma_t^{-1} t \qquad (5)$$
using the EM-algorithm-based update rules for the (k+1)-th iteration given by

$$\hat{w}_{(k+1)} = E\left[w \mid t; \gamma_{(k)}\right] = \left(\Phi^T\Phi + \sigma^2\,\Gamma_{(k)}^{-1}\right)^{-1}\Phi^T t \qquad (6)$$

$$\gamma_{(k+1)} = E\left[\operatorname{diag}(ww^T) \mid t; \gamma_{(k)}\right] = \operatorname{diag}\left[\hat{w}_{(k)}\hat{w}_{(k)}^T + \left(\sigma^{-2}\Phi^T\Phi + \Gamma_{(k)}^{-1}\right)^{-1}\right]. \qquad (7)$$

Upon convergence to some γ_ML, we compute weight estimates as ŵ = E[w | t; γ_ML], allowing us to generate t̂ = Φŵ ≈ t. We now quantify the relationship between this procedure and ℓ0-norm minimization.
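As a concrete illustration, here is a minimal sketch of the EM updates (6) and (7) with a fixed noise variance; the initialization, iteration count, and small floor on γ are our own assumptions, not prescriptions from the paper.

```python
# Sparse Bayesian learning via the EM updates (6)-(7), sigma2 held fixed.
import numpy as np

def sbl_em(Phi, t, sigma2=1e-4, iters=200):
    M = Phi.shape[1]
    gamma = np.ones(M)
    for _ in range(iters):
        # Posterior covariance (sigma^{-2} Phi^T Phi + Gamma^{-1})^{-1}, the matrix in (7).
        Sigma_w = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
        w_hat = Sigma_w @ Phi.T @ t / sigma2                     # update (6)
        gamma = np.maximum(w_hat**2 + np.diag(Sigma_w), 1e-12)   # update (7), floored
    return w_hat, gamma
```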
3 ℓ0-norm Minimization via SBL
Although SBL was initially developed in a regression context, it can nonetheless be easily adapted to handle (1) by fixing σ² to some ε and allowing ε → 0. To accomplish this we must re-express the SBL iterations to handle the low-noise limit. Applying standard matrix identities and the general result

$$\lim_{\varepsilon \to 0} U^T\left(\varepsilon I + UU^T\right)^{-1} = U^{\dagger}, \qquad (8)$$
we arrive at the modified update rules

$$\hat{w}_{(k)} = \Gamma_{(k)}^{1/2}\left(\Phi\,\Gamma_{(k)}^{1/2}\right)^{\dagger} t \qquad (9)$$

$$\gamma_{(k+1)} = \operatorname{diag}\left[\hat{w}_{(k)}\hat{w}_{(k)}^T + \left(I - \Gamma_{(k)}^{1/2}\left(\Phi\,\Gamma_{(k)}^{1/2}\right)^{\dagger}\Phi\right)\Gamma_{(k)}\right], \qquad (10)$$

where (·)† denotes the Moore-Penrose pseudo-inverse. We observe that all ŵ_{(k)} are feasible, i.e., t = Φŵ_{(k)} for all γ_{(k)}.⁵ Also, upon convergence we can easily show that if γ_ML is sparse, ŵ will also be sparse while maintaining feasibility. Thus, we have potentially found an alternative way of solving (1) that is readily computable via the modified iterations above. Perhaps surprisingly, these update rules are equivalent to the Gaussian entropy-based LSM iterations derived in [2, 5], with the exception of the [I − Γ_{(k)}^{1/2}(ΦΓ_{(k)}^{1/2})†Φ]Γ_{(k)} term.
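A minimal sketch of iterations (9) and (10) follows; only the diagonal of the matrix in (10) is computed, and the iteration count is an arbitrary assumption.

```python
# Noise-free SBL updates (9)-(10), obtained in the epsilon -> 0 limit.
import numpy as np

def sbl_noise_free(Phi, t, iters=500):
    M = Phi.shape[1]
    gamma = np.ones(M)
    for _ in range(iters):
        G = np.sqrt(gamma)                     # entries of Gamma^{1/2}
        pinv = np.linalg.pinv(Phi * G)         # (Phi Gamma^{1/2})^dagger, shape M x N
        w_hat = G * (pinv @ t)                 # update (9); always satisfies t = Phi w_hat
        # Update (10): diagonal of w w^T + (I - Gamma^{1/2} pinv Phi) Gamma.
        gamma = w_hat**2 + gamma * (1.0 - G * np.einsum('ij,ji->i', pinv, Phi))
    return w_hat
```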
A firm connection with ℓ0-norm minimization is realized when we consider the global minimum of L(γ; σ² = ε) in the limit as ε approaches zero. We will now quantify this relationship via the following theorem, which extends results from [6].

Theorem 1. Let W₀ denote the set of weight vectors that globally minimize (1). Furthermore, let W(ε) be defined as the set of weight vectors

$$\left\{ w_{**} : w_{**} = \left(\Phi^T\Phi + \varepsilon\,\Gamma_{**}^{-1}\right)^{-1}\Phi^T t,\;\; \gamma_{**} = \arg\min_{\gamma} L(\gamma; \sigma^2 = \varepsilon) \right\}. \qquad (11)$$

Then in the limit as ε → 0, if w ∈ W(ε), then w ∈ W₀.
⁴ We will sometimes use γ and Γ interchangeably when appropriate.
⁵ This assumes that t is in the span of the columns of Φ associated with nonzero elements in γ, which will always be the case if t is in the span of Φ and all γᵢ are initialized to nonzero values.
A full proof of this result is available at [9]; however, we provide a brief sketch here. First, we know from [6] that every local minimum of L(γ; σ² = ε) is achieved at a basic feasible solution γ* (i.e., a solution with N or fewer nonzero entries), regardless of ε. Therefore, in our search for the global minimum, we only need examine the space of basic feasible solutions. As we allow ε to become sufficiently small, we show that

$$L(\gamma_*; \sigma^2 = \varepsilon) = \left(N - \|\gamma_*\|_0\right)\log(\varepsilon) + O(1) \qquad (12)$$

at any such solution. This result is minimized when ‖γ*‖₀ is as small as possible. A maximally sparse basic feasible solution, which we denote γ**, can only occur with nonzero elements aligned with the nonzero elements of some w ∈ W₀. In the limit as ε → 0, w** becomes feasible while maintaining the same sparsity profile as γ**, leading to the stated result.
This result demonstrates that the SBL framework can provide an effective proxy to direct ℓ0-norm minimization. More importantly, we will now show that the limiting SBL cost function, which we will henceforth denote

$$L(\gamma) \triangleq \lim_{\varepsilon \to 0} L(\gamma; \sigma^2 = \varepsilon) = \log\left|\Phi\Gamma\Phi^T\right| + t^T\left(\Phi\Gamma\Phi^T\right)^{-1} t, \qquad (13)$$

need not have the same problematic local minima profile as other methods.
4 Analysis of Local Minima
Thus far, we have demonstrated that there is a close affiliation between the limiting SBL framework and the minimization problem posed by (1). We have not, however, provided any concrete reason why SBL should be preferred over current LSM methods of finding sparse solutions. In fact, this preference is not established until we carefully consider the problem of convergence to local minima.
As already mentioned, the problem with current methods of minimizing ‖w‖₀ is that every basic feasible solution unavoidably becomes a local minimum. However, what if we could somehow eliminate all or most of these extrema? For example, consider the alternate objective function f(w) ≜ min(‖w‖₀, N), leading to the optimization problem

$$\min_w f(w), \quad \text{s.t. } t = \Phi w. \qquad (14)$$
While the global minimum remains unchanged, we observe that all local minima occurring at non-degenerate basic feasible solutions have been effectively removed.⁶ In other words, at any solution w* with N nonzero entries, we can always add a small component αw′ ∈ Null(Φ) without increasing f(w), since f(w) can never be greater than N. Therefore, we are free to move from basic feasible solution to basic feasible solution without increasing f(w). Also, the rare degenerate basic solutions that do remain, even if suboptimal, are sparser by definition. Therefore, locally minimizing our new problem (14) is clearly superior to locally minimizing (1). But how can we implement such a minimization procedure, even approximately, in practice?
Although we cannot remove all non-degenerate local minima and still retain computational feasibility, it is possible to remove many of them, providing some measure of approximation to (14). This is effectively what is accomplished using SBL, as will be demonstrated below. Specifically, we will derive necessary conditions required for a non-degenerate basic feasible solution to represent a local minimum to L(γ). We will then show that these conditions are frequently not satisfied, implying that there are potentially many fewer local minima. Thus, locally minimizing L(γ) comes closer to (locally) minimizing (14) than current LSM methods, which in turn, is closer to globally minimizing ‖w‖₀.
⁶ A degenerate basic feasible solution has strictly fewer than N nonzero entries; however, the vast majority of local minima are non-degenerate, containing exactly N nonzero entries.
4.1 Necessary Conditions for Local Minima
As previously stated, all local minima to L(γ) must occur at basic feasible solutions γ*. Now suppose that we have found a (non-degenerate) γ* with associated w* computed via (9), and we would like to assess whether or not it is a local minimum of our SBL cost function. For convenience, let w̃ denote the N nonzero elements of w* and Φ̃ the associated columns of Φ (therefore, t = Φ̃w̃ and w̃ = Φ̃⁻¹t). Intuitively, it would seem likely that if we are not at a true local minimum, then there must exist at least one additional column of Φ not in Φ̃, e.g., some x, that is somehow aligned with or in some respect similar to t. Moreover, the significance of this potential alignment must be assessed relative to Φ̃. But how do we quantify this relationship for the purposes of analyzing local minima?
As it turns out, a useful metric for comparison is realized when we decompose x with respect to Φ̃, which forms a basis in ℝᴺ under the URP assumption. For example, we may form the decomposition x = Φ̃ṽ, where ṽ is a vector of weights analogous to w̃. As will be shown below, the similarity required between x and t (needed for establishing the existence of a local minimum) may then be realized by comparing the respective weights ṽ and w̃. In more familiar terms, this is analogous to suggesting that similar signals have similar Fourier expansions. Loosely, we may expect that if ṽ is "close enough" to w̃, then x is sufficiently close to t (relative to all other columns in Φ̃) such that we are not at a local minimum. We formalize this idea via the following theorem:
Theorem 2. Let Φ satisfy the URP and let γ* represent a vector of hyperparameters with N and only N nonzero entries and associated basic feasible solution w̃ = Φ̃⁻¹t. Let X denote the set of M − N columns of Φ not included in Φ̃, and V the set of weights given by V = {ṽ : ṽ = Φ̃⁻¹x, x ∈ X}. Then γ* is a local minimum of L(γ) only if

$$\sum_{i \neq j} \frac{\tilde{v}_i \tilde{v}_j}{\tilde{w}_i \tilde{w}_j} < 0 \qquad \forall\, \tilde{v} \in V. \qquad (15)$$
Proof: If γ* truly represents a local minimum of our cost function, then the following condition must hold for all x ∈ X:

$$\frac{\partial L(\gamma_*)}{\partial \gamma_x} \geq 0, \qquad (16)$$

where γₓ denotes the hyperparameter corresponding to the basis vector x. In words, we cannot reduce L(γ*) along a positive gradient because this would push γₓ below zero. Using the matrix inversion lemma, the determinant identity, and some algebraic manipulations, we arrive at the expression

$$\frac{\partial L(\gamma_*)}{\partial \gamma_x} = \frac{x^T B x}{1 + \gamma_x\, x^T B x} - \left(\frac{t^T B x}{1 + \gamma_x\, x^T B x}\right)^2, \qquad (17)$$

where B ≜ (Φ̃Γ̃Φ̃ᵀ)⁻¹. Since we have assumed that we are at a local minimum, it is straightforward to show that Γ̃ = diag(w̃)², leading to the expression

$$B = \tilde{\Phi}^{-T}\, \operatorname{diag}(\tilde{w})^{-2}\, \tilde{\Phi}^{-1}. \qquad (18)$$
Substituting this expression into (17) and evaluating at the point γₓ = 0, the above gradient reduces to

$$\frac{\partial L(\gamma_*)}{\partial \gamma_x} = \tilde{v}^T\left[\operatorname{diag}\!\left(\tilde{w}^{-1}\tilde{w}^{-T}\right) - \tilde{w}^{-1}\tilde{w}^{-T}\right]\tilde{v}, \qquad (19)$$

where $\tilde{w}^{-1} \triangleq [\tilde{w}_1^{-1}, \ldots, \tilde{w}_N^{-1}]^T$. This leads directly to the stated theorem.
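As an illustration, here is a minimal sketch (our own, not from the paper) of the Theorem 2 test; it evaluates the sum in (15) for each ṽ using the identity $\sum_{i \neq j} r_i r_j = (\sum_i r_i)^2 - \sum_i r_i^2$ with rᵢ = ṽᵢ/w̃ᵢ.

```python
# Necessary-condition check from Theorem 2. Phi_tilde holds the N supporting
# columns; X holds the remaining M - N columns of Phi.
import numpy as np

def passes_necessary_condition(Phi_tilde, X, t):
    w = np.linalg.solve(Phi_tilde, t)            # w_tilde = Phi_tilde^{-1} t
    for x in X.T:
        r = np.linalg.solve(Phi_tilde, x) / w    # v_tilde / w_tilde, elementwise
        if np.sum(r)**2 - np.sum(r**2) >= 0:     # condition (15) violated for this v
            return False                         # gamma_* cannot be a local minimum
    return True                                  # necessary condition holds for all v
```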
This theorem provides a useful picture of what is required for local minima to exist and, more importantly, why many basic feasible solutions are not local minima. Moreover, there are several convenient ways in which we can interpret this result to accommodate a more intuitive perspective.
4.2 A Simple Geometric Interpretation
In general terms, if the signs of each of the elements in a given ṽ match up with w̃, then the specified condition will be violated and we cannot be at a local minimum. We can illustrate this geometrically as follows.
To begin, we note that our cost function L(γ) is invariant with respect to reflections of any basis vectors about the origin, i.e., we can multiply any column of Φ by −1 and the cost function does not change. Returning to a candidate local minimum with associated Φ̃, we may therefore assume, without loss of generality, that Φ̃ → Φ̃ diag(sgn(w̃)), giving us the decomposition t = Φ̃w̃, w̃ > 0. Under this assumption, we see that t is located in the convex cone formed by the columns of Φ̃. We can infer that if any x ∈ X (i.e., any column of Φ not in Φ̃) lies in this convex cone, then the associated coefficients ṽ must all be positive by definition (likewise, by a similar argument, any x in the convex cone of −Φ̃ leads to the same result). Consequently, Theorem 2 ensures that we are not at a local minimum. The simple 2D example shown in Figure 1 helps to illustrate this point.
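A minimal sketch of the resulting sign test: after flipping column signs so the weights are positive, x lies in the convex cone of Φ̃ (or of its reflection) exactly when the entries of ṽ share a single sign.

```python
# Cone-membership test implied by the geometric interpretation above.
import numpy as np

def x_in_cone(Phi_tilde, w_tilde, x):
    # Scale each column by the sign of its weight so that t = Phi_tilde w, w > 0.
    v = np.linalg.solve(Phi_tilde * np.sign(w_tilde), x)
    return np.all(v > 0) or np.all(v < 0)        # True => not an SBL local minimum
```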
Figure 1: 2D example with a 2 × 3 dictionary Φ (i.e., N = 2 and M = 3) and a basic feasible solution using the columns Φ̃ = [Φ₁ Φ₂]. Left: In this case, x = Φ₃ does not penetrate the convex cone containing t, and we do not satisfy the conditions of Theorem 2. This configuration does represent a minimizing basic feasible solution. Right: Now x is in the cone and therefore we know that we are not at a local minimum; but this configuration does represent a local minimum to current LSM methods.
Alternatively, we can cast this geometric perspective in terms of relative cone sizes. For example, let C_Φ̃ represent the convex cone (and its reflection) formed by Φ̃. Then we are not at a local minimum of L(γ) if there exists a second convex cone C, formed from a subset of columns of Φ, such that t ∈ C ⊂ C_Φ̃, i.e., C is a tighter cone containing t. In Figure 1 (right), we obtain a tighter cone by swapping x for Φ₂.
While certainly useful, we must emphasize that in higher dimensions these geometric conditions are much weaker than (15); e.g., even if no x lies in the convex cone of Φ̃, we still may not be at a local minimum. In fact, to guarantee a local minimum, all x must be reasonably far from this cone, as quantified by (15). Of course the ultimate reduction in local minima from the $\binom{M-1}{N-1} + 1$ to $\binom{M}{N}$ bounds is dependent on the distribution of basis vectors in t-space. In general, it is difficult to quantify this reduction except in a few special cases.⁷ However, we will now proceed to empirically demonstrate that the overall reduction in local minima is substantial when the basis vectors are randomly distributed.

M/N                    1.3     1.6     2.0     2.4     3.0
SBL Local Minimum %    4.9%    4.0%    3.2%    2.3%    1.6%

Table 1: Given 1000 trials where FOCUSS has converged to a suboptimal local minimum, we tabulate the percentage of times the local minimum is also a local minimum to SBL. M/N refers to the overcompleteness ratio of the dictionary used, with N fixed at 20. Results using other algorithms are similar.
5 Empirical Comparisons
To show that the potential reduction in local minima derived above translates into concrete results, we conducted a simulation study using randomized basis vectors distributed isometrically in t-space. Randomized dictionaries are of interest in signal processing and other disciplines [2, 7] and represent a viable benchmark for testing basis selection methods. Moreover, we have performed analogous experiments with other dictionary types (such as pairs of orthobases), leading to similar results (see [9] for some examples).
Our goal was to demonstrate that current LSM algorithms often converge to local minima that do not exist in the SBL cost function. To accomplish this, we repeated the following procedure for dictionaries of various sizes. First, we generate a random N × M Φ whose columns are each drawn uniformly from a unit sphere. Sparse weight vectors w₀ are randomly generated with ‖w₀‖₀ = 7 (and uniformly distributed amplitudes on the nonzero components). The vector of target values is then computed as t = Φw₀. The LSM algorithm is then presented with t and Φ and attempts to learn the minimum ℓ0-norm solution. The experiment is repeated a sufficient number of times such that we collect 1000 examples where the LSM algorithm converges to a local minimum. In all these cases, we check whether the condition stipulated by Theorem 2 applies, allowing us to determine if the given solution is a local minimum to the SBL algorithm or not. The results are contained in Table 1 for the FOCUSS LSM algorithm. We note that the larger the overcompleteness ratio M/N, the larger the total number of LSM local minima (via the bounds presented earlier). However, there also appears to be a greater probability that SBL can avoid any given one.
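For reference, a minimal sketch of one trial of this setup; the amplitude range is an illustrative assumption (the text specifies only uniformly distributed amplitudes on the support).

```python
# One trial of the randomized-dictionary experiment described above.
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 20, 40, 7
Phi = rng.standard_normal((N, M))
Phi /= np.linalg.norm(Phi, axis=0)           # columns uniform on the unit sphere
w0 = np.zeros(M)
support = rng.choice(M, size=k, replace=False)
w0[support] = rng.uniform(0.5, 1.5, size=k)  # ||w0||_0 = 7
t = Phi @ w0                                 # target presented to each algorithm
```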
In many cases where we found that SBL was not locally minimized, we initialized the SBL algorithm in this location and observed whether or not it converged to the optimal solution. In roughly 50% of these cases, it escaped to find the maximally sparse solution. The remaining times, it did escape in accordance with theory; however, it converged to another local minimum. In contrast, when we initialize other LSM algorithms at an SBL local minimum, we always remain trapped as expected.
6 Discussion
In practice, we have consistently observed that SBL outperforms current LSM algorithms in finding maximally sparse solutions (e.g., see [9]). The results of this paper provide a very plausible explanation for this improved performance: conventional LSM procedures are very likely to converge to local minima that do not exist in the SBL landscape. However, it may still be unclear exactly why this happens. In conclusion, we give a brief explanation that provides insight into this issue.

⁷ For example, in the special case where t is proportional to a single column of Φ, we can show that the number of local minima reduces from $\binom{M-1}{N-1} + 1$ to 1, i.e., we are left with a single minimum.
Consider the canonical FOCUSS LSM algorithm or the Figueiredo algorithm from [5] (with σ² fixed to zero, the Figueiredo algorithm is actually equivalent to the FOCUSS algorithm). These methods essentially solve the problem

$$\min_w \sum_{i=1}^{M} \log|w_i|, \quad \text{s.t. } t = \Phi w, \qquad (20)$$
where the objective function is proportional to the Gaussian entropy measure. In contrast, we can show that, up to a scale factor, any minimum of L(γ) must also be a minimum of

$$\min_\gamma \sum_{i=1}^{N} \log \lambda_i(\gamma), \quad \text{s.t. } \gamma \in \Omega_t, \qquad (21)$$

where λᵢ(γ) is the i-th eigenvalue of ΦΓΦᵀ and Ωₜ is the convex set {γ : tᵀ(ΦΓΦᵀ)⁻¹t ≤ 1, γ ≥ 0}.
In both instances, we are minimizing a Gaussian entropy measure over a convex constraint set. The crucial difference resides in the particular parameterization applied to this measure. In (20), we see that if any subset of the |wᵢ| becomes significantly small (e.g., as we approach a basic feasible solution), we enter the basin of a local minimum because the associated log|wᵢ| terms become enormously negative; hence the one-to-one correspondence between basic feasible solutions and local minima of the LSM algorithms.
In contrast, when working with (21), many of the γᵢ may approach zero without our becoming trapped, as long as ΦΓΦᵀ remains reasonably well-conditioned. In other words, since Φ is overcomplete, up to M − N of the γᵢ can be zero while still maintaining a full set of nonzero eigenvalues of ΦΓΦᵀ, so no term in the summation is driven towards minus infinity as occurred above. Thus, we can switch from one basic feasible solution to another in many instances while still reducing our objective function. It is in this respect that SBL approximates the minimization of the alternative objective posed by (14).
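To make the contrast concrete, here is a minimal sketch of the two objectives; the small floor inside the logarithms is a numerical safeguard we added, not part of either formulation.

```python
# The two Gaussian-entropy parameterizations, eqs. (20) and (21).
import numpy as np

def lsm_objective(w, floor=1e-12):
    # Eq. (20): diverges toward -inf as any |w_i| -> 0, trapping the iteration.
    return np.sum(np.log(np.abs(w) + floor))

def sbl_objective(gamma, Phi, floor=1e-12):
    # Eq. (21): stays bounded as long as Phi Gamma Phi^T keeps N nonzero eigenvalues.
    lam = np.linalg.eigvalsh(Phi @ np.diag(gamma) @ Phi.T)
    return np.sum(np.log(lam + floor))
```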
References
[1] S.S. Chen, D.L. Donoho, and M.A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33-61, 1999.
[2] B.D. Rao and K. Kreutz-Delgado, "An affine scaling methodology for best basis selection," IEEE Transactions on Signal Processing, vol. 47, no. 1, pp. 187-200, January 1999.
[3] R.M. Leahy and B.D. Jeffs, "On the design of maximally sparse beamforming arrays," IEEE Transactions on Antennas and Propagation, vol. 39, no. 8, pp. 1178-1187, August 1991.
[4] I.F. Gorodnitsky and B.D. Rao, "Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm," IEEE Transactions on Signal Processing, vol. 45, no. 3, pp. 600-616, March 1997.
[5] M.A.T. Figueiredo, "Adaptive sparseness using Jeffreys prior," Neural Information Processing Systems, vol. 14, pp. 697-704, 2002.
[6] D.P. Wipf and B.D. Rao, "Sparse Bayesian learning for basis selection," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2153-2164, 2004.
[7] D.L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," Proc. National Academy of Sciences, vol. 100, no. 5, pp. 2197-2202, March 2003.
[8] M.E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001.
[9] D.P. Wipf and B.D. Rao, "Some results on sparse Bayesian learning," ECE Department Technical Report, University of California, San Diego, 2005.
Planning for Markov Decision Processes with Sparse Stochasticity
Maxim Likhachev
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Geoff Gordon
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Sebastian Thrun
Dept. of Computer Science
Stanford University
Stanford CA 94305
[email protected]
Abstract
Planning algorithms designed for deterministic worlds, such as A*
search, usually run much faster than algorithms designed for worlds with
uncertain action outcomes, such as value iteration. Real-world planning
problems often exhibit uncertainty, which forces us to use the slower
algorithms to solve them. Many real-world planning problems exhibit
sparse uncertainty: there are long sequences of deterministic actions
which accomplish tasks like moving sensor platforms into place, interspersed with a small number of sensing actions which have uncertain outcomes. In this paper we describe a new planning algorithm, called MCP
(short for MDP Compression Planning), which combines A* search with
value iteration for solving the Stochastic Shortest Path problem in MDPs
with sparse stochasticity. We present experiments which show that MCP
can run substantially faster than competing planners in domains with
sparse uncertainty; these experiments are based on a simulation of a
ground robot cooperating with a helicopter to fill in a partial map and
move to a goal location.
In deterministic planning problems, optimal paths are acyclic: no state is visited more
than once. Because of this property, algorithms like A* search can guarantee that they visit
each state in the state space no more than once. By visiting the states in an appropriate
order, it is possible to ensure that we know the exact value of all of a state's possible
successors before we visit that state; so, the first time we visit a state we can compute its
correct value.
By contrast, if actions have uncertain outcomes, optimal paths may contain cycles:
some states will be visited two or more times with positive probability. Because of these
cycles, there is no way to order states so that we determine the values of a state's successors
before we visit the state itself. Instead, the only way to compute state values is to solve a
set of simultaneous equations.
In problems with sparse stochasticity, only a small fraction of all states have uncertain
outcomes. It is these few states that cause all of the cycles: while a deterministic state s
may participate in a cycle, the only way it can do so is if one of its successors has an action
with a stochastic outcome (and only if this stochastic action can lead to a predecessor of s).
In such problems, we would like to build a smaller MDP which contains only states
which are related to stochastic actions. We will call such an MDP a compressed MDP,
and we will call its states distinguished states. We could then run fast algorithms like A*
search to plan paths between distinguished states, and reserve slower algorithms like value
iteration for deciding how to deal with stochastic outcomes.
Figure 1: Robot-Helicopter Coordination. (a) Segbot; (b) robotic helicopter; (c) 3D map; (d) planning map; (e) execution simulation.
There are two problems with such a strategy. First, there can be a large number of states
which are related to stochastic actions, and so it may be impractical to enumerate all of them
and make them all distinguished states; we would prefer instead to distinguish only states
which are likely to be encountered while executing some policy which we are considering.
Second, there can be a large number of ways to get from one distinguished state to another:
edges in the compressed MDP correspond to sequences of actions in the original MDP. If
we knew the values of all of the distinguished states exactly, then we could use A* search
to generate optimal paths between them, but since we do not we cannot.
In this paper, we will describe an algorithm which incrementally builds a compressed
MDP using a sequence of deterministic searches. It adds states and edges to the compressed
MDP only by encountering them along trajectories; so, it never adds irrelevant states or
edges to the compressed MDP. Trajectories are generated by deterministic search, and so
undistinguished states are treated only with fast algorithms. Bellman errors in the values
for distinguished states show us where to try additional trajectories, and help us build the
relevant parts of the compressed MDP as quickly as possible.
1
Robot-Helicopter Coordination Problem
The motivation for our research was the problem of coordinating a ground robot and a
helicopter. The ground robot needs to plan a path from its current location to a goal, but
has only partial knowledge of the surrounding terrain. The helicopter can aid the ground
robot by flying to and sensing places in the map.
Figure 1(a) shows our ground robot, a converted Segway with a SICK laser rangefinder.
Figure 1(b) shows the helicopter, also with a SICK. Figure 1(c) shows a 3D map of the
environment in which the robot operates. The 3D map is post-processed to produce a
discretized 2D environment (Figure 1(d)). Several places in the map are unknown, either
because the robot has not visited them or because their status may have changed (e.g., a
car may occupy a driveway). Such places are shown in Figure 1(d) as white squares. The
elevation of each white square is proportional to the probability that there is an obstacle
there; we assume independence between unknown squares.
The robot must take the unknown locations into account when planning for its route.
It may plan a path through these locations, but it risks having to turn back if its way is
blocked. Alternately, the robot can ask the helicopter to fly to any of these places and sense
them. We assign a cost to running the robot, and a somewhat higher cost to running the
helicopter. The planning task is to minimize the expected overall cost of running the robot
and the helicopter while getting the robot to its destination and the helicopter back to its
home base. Figure 1(e) shows a snapshot of the robot and helicopter executing a policy.
Designing a good policy for the robot and helicopter is a POMDP planning problem; unfortunately POMDPs are in general difficult to solve (PSPACE-complete [7]). In the POMDP representation, a state is the position of the robot, the current location of the helicopter (a point on a line segment from one of the unknown places to another unknown place or the home base), and the true status of each unknown location. The positions of the robot and the helicopter are observable, so that the only hidden variables are whether each unknown place is occupied. The number of states is (# of robot locations) × (# of helicopter locations) × 2^(# of unknown places). So, the number of states is exponential in the number of unknown places and therefore quickly becomes very large.
We approach the problem by planning in the belief state space, that is, the space of
probability distributions over world states. This problem is a continuous-state MDP; in this
belief MDP, our state consists of the ground robot's location, the helicopter's location, and
a probability of occupancy for each unknown location. We will discretize the continuous
probability variables by breaking the interval [0, 1] into several chunks; so, the number of
belief states is exponential in the number of unknown places, and classical algorithms such
as value iteration are infeasible even on small problems.
If sensors are perfect, this domain is acyclic: after we sense a square we know its true
state forever after. On the other hand, imperfect sensors can lead to cycles: new sensor data
can contradict older sensor data and lead to increased uncertainty. With or without sensor
noise, our belief state MDP differs from general MDPs because its stochastic transitions
are sparse: large portions of the policy (while the robot and helicopter are traveling between unknown locations) are deterministic. The algorithm we propose in this paper takes
advantage of this property of the problem, as we explain in the next section.
2 The Algorithm
Our algorithm can be broken into two levels. At a high level, it constructs a compressed MDP, denoted M^c, which contains only the start, the goal, and some states which are outcomes of stochastic actions. At a lower level, it repeatedly runs deterministic searches to find new information to put into M^c. This information includes newly-discovered stochastic actions and their outcomes; better deterministic paths from one place to another; and more accurate value estimates similar to Bellman backups. The deterministic searches can use an admissible heuristic h to focus their effort, so we can often avoid putting many irrelevant actions into M^c.

Because M^c will often be much smaller than M, we can afford to run stochastic planning algorithms like value iteration on it. On the other hand, the information we get by planning in M^c will improve the heuristic values that we use in our deterministic searches; so, the deterministic searches will tend to visit only relevant portions of the state space.
2.1 Constructing and Solving a Compressed MDP
Each action in the compressed MDP represents several consecutive actions in M: if we see a sequence of states and actions s₁, a₁, s₂, a₂, …, s_k, a_k where a₁ through a_{k−1} are deterministic but a_k is stochastic, then we can represent it in M^c with a single action a, available at s₁, whose outcome distribution is P(s′ | s_k, a_k) and whose cost is

$$c(s_1, a, s') = \sum_{i=1}^{k-1} c(s_i, a_i, s_{i+1}) + c(s_k, a_k, s').$$

(See Figure 2(a) for an example of such a compressed action.) In addition, if we see a sequence of deterministic actions ending in s_goal, say s₁, a₁, s₂, a₂, …, s_k, a_k, s_{k+1} = s_goal, we can define a compressed action which goes from s₁ to s_goal at cost $\sum_{i=1}^{k} c(s_i, a_i, s_{i+1})$. We can label each compressed action that starts at s with (s, s′, a) (where a = null if s′ = s_goal).
Among all compressed actions starting at s and ending at (s′, a) there is (at least) one with lowest expected cost; we will call such an action an optimal compression of (s, s′, a). Write A_stoch for the set of all pairs (s, a) such that action a when taken from state s has more than one possible outcome, and include (s_goal, null) as well. Write S_stoch for the states which are possible outcomes of the actions in A_stoch, and include s_start as well. If we include in our compressed MDP an optimal compression of (s, s′, a) for every s ∈ S_stoch and every (s′, a) ∈ A_stoch, the result is what we call the full compressed MDP; an example is in Figure 2(b).
If we solve the full compressed MDP, the value of each state will be the same as the value of the corresponding state in M. However, we do not need to do that much work:
Figure 2: MDP compression. (a) Action compression; (b) full MDP compression; (c) incremental MDP compression.
Main()
01 initialize M^c with s_start and s_goal and set their v-values to 0;
02 while (∃ s ∈ M^c s.t. RHS(s) − v(s) > δ and s belongs to the current greedy policy)
03   select s_pivot to be any such state s;
04   [v; v_lim] = Search(s_pivot);
05   v(s_pivot) = v;
06   set the cost c(s_pivot, â, s_goal) of the limit action â from s_pivot to v_lim;
07   optionally run some algorithm satisfying req. A for a bounded amount of time to improve the value function in M^c;

Figure 3: MCP main loop
many states and actions in the full compressed MDP are irrelevant since they do not appear in the optimal policy from s_start to s_goal. So, the goal of the MCP algorithm will be to construct only the relevant part of the compressed MDP by building M^c incrementally. Figure 2(c) shows the incremental construction of a compressed MDP which contains all of the stochastic states and actions along an optimal policy in M.

The pseudocode for MCP is given in Figure 3. It begins by initializing M^c to contain only s_start and s_goal, and it sets v(s_start) = v(s_goal) = 0. It maintains the invariant that 0 ≤ v(s) ≤ v*(s) for all s. On each iteration, MCP looks at the Bellman error of each of the states in M^c. The Bellman error is v(s) − RHS(s), where

$$\mathrm{RHS}(s) = \min_{a \in A(s)} \mathrm{RHS}(s, a), \qquad \mathrm{RHS}(s, a) = E_{s' \sim \mathrm{Succ}(s,a)}\left[c(s, a, s') + v(s')\right].$$

By convention the min of an empty set is ∞, so an s which does not have any compressed actions yet is considered to have infinite RHS.
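For concreteness, a minimal sketch of these definitions; representing each compressed action as a list of (outcome, cost, probability) triples is our own assumption, not the paper's data structure.

```python
# RHS and Bellman error for a state in the compressed MDP M^c.
import math

def rhs(s, v, actions):
    best = math.inf                              # min over an empty set is infinity
    for a in actions.get(s, []):                 # a is a list of (s', cost, prob) triples
        best = min(best, sum(p * (c + v[s2]) for (s2, c, p) in a))
    return best

def bellman_error(s, v, actions):
    return v[s] - rhs(s, v, actions)
```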
MCP selects a state with negative Bellman error, s_pivot, and starts a search at that state. (We note that there exist many possible ways to select s_pivot; for example, we can choose the state with the largest negative Bellman error, or the largest error when weighted by state visitation probabilities in the best policy in M^c.) The goal of this search is to find a new compressed action a such that its RHS-value can provide a new lower bound on v*(s_pivot). This action can either decrease the current RHS(s_pivot) (if a seems to be a better action in terms of the current v-values of action outcomes) or prove that the current RHS(s_pivot) is valid. Since v(s′) ≤ v*(s′), one way to guarantee that RHS(s_pivot, a) ≤ v*(s_pivot) is to compute an optimal compression of (s_pivot, s, a) for all s, a, then choose the one with the smallest RHS. A more sophisticated strategy is to use an A* search with appropriate safeguards to make sure we never overestimate the value of a stochastic action. MCP, however, uses a modified A* search which we will describe in the next section.
As the search finds new compressed actions, it adds them and their outcomes to M^c. It is allowed to initialize newly-added states to any admissible values. When the search returns, MCP sets v(s_pivot) to the returned value. This value is at least as large as RHS(s_pivot). Consequently, the Bellman error for s_pivot becomes non-negative.

In addition to the compressed action and the updated value, the search algorithm returns a "limit value" v_lim(s_pivot). These limit values allow MCP to run a standard MDP planning algorithm on M^c to improve its v(s) estimates. MCP can use any planning algorithm which guarantees that, for any s, it will not lower v(s) and will not increase v(s) beyond the smaller of v_lim(s) and RHS(s) (Requirement A). For example, we could insert a fake "limit action" into M^c which goes directly from s_pivot to s_goal at cost v_lim(s_pivot) (as we do on line 06 in Figure 3), then run value iteration for a fixed amount of time, selecting for each backup a state with negative Bellman error.

After updating M^c from the result of the search and any optional planning, MCP begins again by looking for another state with a negative Bellman error. It repeats this process until there are no negative Bellman errors of magnitude larger than δ. For small enough δ, this property guarantees that we will be able to find a good policy (see Section 2.3).
2.2 Searching the MDP Efficiently
The top-level algorithm (Figure 3) repeatedly invokes a search method for finding trajectories from s_pivot to s_goal. In order for the overall algorithm to work correctly, there are several properties that the search must satisfy. First, the estimate v that the search returns for the expected cost of s_pivot should always be admissible. That is, 0 ≤ v ≤ v*(s_pivot) (Property 1). Second, the estimate v should be no less than the one-step lookahead value of s_pivot in M^c. That is, v ≥ RHS(s_pivot) (Property 2). This property ensures that the search either increases the value of s_pivot or finds additional (or improved) compressed actions. The third and final property is for the v_lim value, and it is only important if MCP uses its optional planning step (line 07). The property is that v ≤ v_lim ≤ v̂*(s_pivot) (Property 3). Here v̂*(s_pivot) denotes the minimum expected cost of starting at s_pivot, picking a compressed action not in M^c, and acting optimally from then on. (Note that v̂* can be larger than v* if the optimal compressed action is already part of M^c.) Property 3 uses v̂* rather than v* since the latter is not known, while it is possible to compute a lower bound on the former efficiently (see below).

One could adapt A* search to satisfy at least Properties 1 and 2 by assuming that we can control the outcome of stochastic actions. However, this sort of search is highly optimistic and can bias the search towards improbable trajectories. Also, it can only use heuristics which are even more optimistic than it is: that is, h must be admissible with respect to the optimistic assumption of controlled outcomes.

We therefore present a version of A*, called MCP-search (Figure 4), that is more efficient for our purposes. MCP-search finds the correct expected value for the first stochastic action it encounters on any given trajectory, and is therefore far less optimistic. And, MCP-search only requires heuristic values to be admissible with respect to v* values, h(s) ≤ v*(s). Finally, MCP-search speeds up repetitive searches by improving heuristic values based on previous searches.
A* maintains a priority queue, OPEN, of states which it plans to expand. The OPEN queue is sorted by f(s) = g(s) + h(s), so that A* always expands next a state which appears to be on the shortest path from start to goal. During each expansion a state s is removed from OPEN and all the g-values of s's successors are updated; if g(s′) is decreased for some state s′, A* inserts s′ into OPEN. A* terminates as soon as the goal state is expanded. We use the variant of A* with pathmax [5] to efficiently use heuristics that do not satisfy the triangle inequality.

MCP is similar to A*, but the OPEN list can also contain state-action pairs {s, a} where a is a stochastic action (line 31). Plain states are represented in OPEN as {s, null}.
ImproveHeuristic(s)
01 if s ∈ M^c then h(s) = max(h(s), v(s));
02 improve heuristic h(s) further if possible using f_best and g(s) from previous iterations;

procedure fvalue({s, a})
03 if s = null return ∞;
04 else if a = null return g(s) + h(s);
05 else return g(s) + max(h(s), E_{s′∼Succ(s,a)}{c(s, a, s′) + h(s′)});

CheckInitialize(s)
06 if s was accessed last in some previous search iteration
07   ImproveHeuristic(s);
08 if s was not yet initialized in the current search iteration
09   g(s) = ∞;

InsertUpdateCompAction(s_pivot, s, a)
10 reconstruct the path from s_pivot to s;
11 insert compressed action (s_pivot, s, a) into A(s_pivot) (or update the cost if a cheaper path was found);
12 for each outcome u of a that was not in M^c previously
13   set v(u) to h(u) or any other value less than or equal to v*(u);
14   set the cost c(u, â, s_goal) of the limit action â from u to v(u);

procedure Search(s_pivot)
15 CheckInitialize(s_goal), CheckInitialize(s_pivot);
16 g(s_pivot) = 0;
17 OPEN = {{s_pivot, null}};
18 {s_best, a_best} = {null, null}, f_best = ∞;
19 while (g(s_goal) > min_{{s,a}∈OPEN} fvalue({s, a}) AND f_best + θ > min_{{s,a}∈OPEN} fvalue({s, a}))
20   remove {s, a} with the smallest fvalue({s, a}) from OPEN, breaking ties towards the pairs with a = null;
21   if a = null                          // expand state s
22     for each s′ ∈ Succ(s)
23       CheckInitialize(s′);
24     for each deterministic a′ ∈ A(s)
25       s′ = Succ(s, a′);
26       h(s′) = max(h(s′), h(s) − c(s, a′, s′));
27       if g(s′) > g(s) + c(s, a′, s′)
28         g(s′) = g(s) + c(s, a′, s′);
29         insert/update {s′, null} into OPEN with fvalue({s′, null});
30     for each stochastic a′ ∈ A(s)
31       insert/update {s, a′} into OPEN with fvalue({s, a′});
32   else                                 // encode stochastic action a from state s as a compressed action from s_pivot
33     InsertUpdateCompAction(s_pivot, s, a);
34     if f_best > fvalue({s, a}) then {s_best, a_best} = {s, a}, f_best = fvalue({s, a});
35 if (g(s_goal) ≤ min_{{s,a}∈OPEN} fvalue({s, a}) AND OPEN ≠ ∅)
36   reconstruct the path from s_pivot to s_goal;
37   update/insert into A(s_pivot) a deterministic action a leading to s_goal;
38   if f_best ≥ g(s_goal) then {s_best, a_best} = {s_goal, null}, f_best = g(s_goal);
39 return [f_best; min_{{s,a}∈OPEN} fvalue({s, a})];

Figure 4: MCP-search Algorithm
Just like A*, MCP-search expands elements in the order of increasing f-values, but it breaks ties towards elements encoding plain states (line 20). The f-value of {s, a} is defined as g(s) + max[h(s), E_{s′∼Succ(s,a)}(c(s, a, s′) + h(s′))] (line 05). This f-value is a lower bound on the cost of a policy that goes from s_start to s_goal by first executing a series of deterministic actions until action a is executed from state s. This bound is as tight as possible given our heuristic values.
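A minimal sketch of the fvalue rule from lines 03-05 of Figure 4; the dictionary-based g, h, and successor tables are assumptions for illustration, not the paper's implementation.

```python
# fvalue({s, a}): scores a plain state, or a state paired with a stochastic action.
import math

def fvalue(s, a, g, h, succ):
    if s is None:
        return math.inf                                  # line 03
    if a is None:
        return g[s] + h[s]                               # line 04: plain state
    exp = sum(p * (c + h[s2]) for (s2, c, p) in succ[(s, a)])
    return g[s] + max(h[s], exp)                         # line 05: stochastic pair
```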
State expansion (lines 21-31) is very similar to A*. When the search removes from OPEN a state-action pair {s, a} with a ≠ null, it adds a compressed action to M^c (line 33). It also adds a compressed action if there is an optimal deterministic path to s_goal (line 37). f_best tracks the minimum f-value of all the compressed actions found. As a result, f_best ≤ v*(s_pivot) and is used as a new estimate for v(s_pivot). The limit value v_lim(s_pivot) is obtained by continuing the search until the minimum f-value of elements in OPEN approaches f_best + θ for some θ ≥ 0 (line 19). This minimum f-value then provides a lower bound on v̂*(s_pivot).

To speed up repetitive searches, MCP-search improves the heuristic of every state that it encounters for the first time in the current search iteration (lines 01 and 02). Line 01 uses the fact that v(s) from M^c is a lower bound on v*(s). Line 02 uses the fact that f_best − g(s) is a lower bound on v*(s) at the end of each previous call to Search; for more details see [4].
2.3 Theoretical Properties of the Algorithm
We now present several theorems about our algorithm. The proofs of these and other theorems can be found in [4]. The first theorem states the main properties of MCP-search.
Theorem 1. The Search function terminates, and the following holds for the values it returns:
(a) if s_best ≠ null then v*(s_pivot) ≥ f_best ≥ E{c(s_pivot, a_best, s′) + v(s′)};
(b) if s_best = null then v*(s_pivot) = f_best = ∞;
(c) f_best ≤ min_{{s,a}∈OPEN} fvalue({s, a}) ≤ v̂*(s_pivot).

If neither s_goal nor any state-action pairs were expanded, then s_best = null and (b) says that there is no policy from s_pivot that has a finite expected cost. Using the above theorem it is easy to show that MCP-search satisfies Properties 1, 2 and 3, considering that f_best is returned as variable v and min_{{s,a}∈OPEN} fvalue({s, a}) is returned as variable v_lim in the main loop of the MCP algorithm (Figure 3). Property 1 follows directly from (a) and (b) and the fact that costs are strictly positive and v-values are non-negative. Property 2 also follows trivially from (a) and (b). Property 3 follows from (c). Given these properties, the next theorem states the correctness of the outer MCP algorithm (in the theorem, π^c_greedy denotes a greedy policy that always chooses an action that looks best based on its cost and the v-values of its immediate successors).
Theorem 2. Given a deterministic search algorithm which satisfies Properties 1-3, the MCP algorithm will terminate. Upon termination, for every state s ∈ M^c ∩ π^c_greedy we have RHS(s) − δ ≤ v(s) ≤ v*(s).

Given the above theorem one can show that for 0 ≤ δ < c_min (where c_min is the smallest expected action cost in our MDP) the expected cost of executing π^c_greedy from s_start is at most (c_min / (c_min − δ)) · v*(s_start). Picking δ ≥ c_min is not guaranteed to result in a proper policy, even though Theorem 2 continues to hold.
3 Experimental Study
We have evaluated the MCP algorithm on the robot-helicopter coordination problem described in section 1. To obtain an admissible heuristic, we first compute a value function
for every possible configuration of obstacles. Then we weight the value functions by the
probabilities of their obstacle configurations, sum them, and add the cost of moving the
helicopter back to its base if it is not already there. This procedure results in optimistic cost
estimates because it pretends that the robot will find out the obstacle locations immediately
instead of having to wait to observe them.
The results of our experiments are shown in Figure 5. We have compared MCP against three algorithms: RTDP [1], LAO* [2] and value iteration on reachable states (VI). RTDP can cope with large MDPs by focusing its planning efforts along simulated execution trajectories. LAO* uses heuristics to prune away irrelevant states, then repeatedly performs dynamic programming on the states in its current partial policy. We have implemented LAO* so that it reduces to AO* [6] when environments are acyclic (e.g., the robot-helicopter problem with perfect sensing). VI was only able to run on the problems with perfect sensing since the number of reachable states was too large for the others.
The results support the claim that MCP can solve large problems with sparse stochasticity. For the problem with perfect sensing, on average MCP was able to plan 9.5 times
faster than LAO*, 7.5 times faster than RTDP, and 8.5 times faster than VI. On average for
these problems, MCP computed values for 58,633 states while M^c grew to 396 states, and
MCP encountered 3,740 stochastic transitions (to give a sense of the degree of stochasticity). The main cost of MCP was in its deterministic search subroutine; this fact suggests
that we might benefit from anytime search techniques such as ARA* [3].
The results for the problems with imperfect sensing show that, as the number and density of uncertain outcomes increase, the advantage of MCP decreases. For these problems
MCP was able to solve environments 10.2 times faster than LAO* but only 2.2 times faster
than RTDP. On average MCP computed values for 127,442 states, while the size of M^c
was 3,713 states, and 24,052 stochastic transitions were encountered.
Figure 5: Experimental results. The top row: the robot-helicopter coordination problem with perfect
sensors. The bottom row: the robot-helicopter coordination problem with sensor noise. Left column:
running times (in secs) for each algorithm grouped by environments. Middle column: the number
of backups for each algorithm grouped by environments. Right column: an estimate of the expected
cost of an optimal policy (v(sstart )) vs. running time (in secs) for experiment (k) in the top row and
experiment (e) in the bottom row. Algorithms in the bar plots (left to right): MCP, LAO*, RTDP and
VI (VI is only shown for problems with perfect sensing). The characteristics of the environments are
given in the second and third rows under each bar plot. The second row indicates how many
cells the 2D plane is discretized into, and the third row indicates the number of initially unknown
cells in the environment.
4 Discussion
The MCP algorithm incrementally builds a compressed MDP using a sequence of deterministic searches. Our experimental results suggest that MCP is advantageous for problems
with sparse stochasticity. In particular, MCP has allowed us to scale to larger environments
than were otherwise possible for the robot-helicopter coordination problem.
Acknowledgements
This research was supported by DARPA's MARS program. All conclusions are our own.
References
[1] A. Barto, S. Bradtke, and S. Singh. Learning to act using real-time dynamic programming. Artificial Intelligence, 72:81–138, 1995.
[2] E. Hansen and S. Zilberstein. LAO*: A heuristic search algorithm that finds solutions
with loops. Artificial Intelligence, 129:35–62, 2001.
[3] M. Likhachev, G. Gordon, and S. Thrun. ARA*: Anytime A* with provable bounds
on sub-optimality. In Advances in Neural Information Processing Systems (NIPS) 16.
Cambridge, MA: MIT Press, 2003.
[4] M. Likhachev, G. Gordon, and S. Thrun. MCP: Formal analysis. Technical report,
Carnegie Mellon University, Pittsburgh, PA, 2004.
[5] L. Mero. A heuristic search algorithm with modifiable estimate. Artificial Intelligence,
23:13–27, 1984.
[6] N. Nilsson. Principles of Artificial Intelligence. Palo Alto, CA: Tioga Publishing,
1980.
[7] C. H. Papadimitriou and J. N. Tsitsiklis. The complexity of Markov decision processes. Mathematics of Operations Research, 12(3):441–450, 1987.
Result Analysis of the NIPS 2003
Feature Selection Challenge
Isabelle Guyon
ClopiNet
Berkeley, CA 94708, USA
[email protected]
Steve Gunn
School of Electronics and Computer Science
University of Southampton, U.K.
[email protected]
Asa Ben Hur
Department of Genome Sciences
University of Washington, USA
[email protected]
Gideon Dror
Department of Computer Science
Academic College of Tel-Aviv-Yaffo, Israel
[email protected]
Abstract
The NIPS 2003 workshops included a feature selection competition organized by the authors. We provided participants with five
datasets from different application domains and called for classification results using a minimal number of features. The competition
took place over a period of 13 weeks and attracted 78 research
groups. Participants were asked to make on-line submissions on
the validation and test sets, with performance on the validation set
being presented immediately to the participant and performance
on the test set presented to the participants at the workshop. In
total 1863 entries were made on the validation sets during the
development period and 135 entries on all test sets for the final
competition. The winners used a combination of Bayesian neural networks with ARD priors and Dirichlet diffusion trees. Other
top entries used a variety of methods for feature selection, which
combined filters and/or wrapper or embedded methods using Random Forests, kernel methods, or neural networks as a classification
engine. The results of the benchmark (including the predictions
made by the participants and the features they selected) and the
scoring software are publicly available. The benchmark is available
at www.nipsfsc.ecs.soton.ac.uk for post-challenge submissions
to stimulate further research.
1 Introduction
Recently, the quality of research in Machine Learning has been raised by the sustained data sharing efforts of the community. Data repositories include the well
known UCI Machine Learning repository [13], and dozens of other sites [10]. Yet,
this has not diminished the importance of organized competitions. In fact, the
proliferation of datasets combined with the creativity of researchers in designing
experiments makes it hardly possible to compare one paper with another [12]. A
number of large conferences have regularly organized competitions (e.g. KDD,
CAMDA, ICDAR, TREC, ICPR, and CASP). The NIPS workshops offer an ideal
forum for organizing such competitions. In 2003, we organized a competition on
the theme of feature selection, the results of which were presented at a workshop
on feature extraction, which attracted 98 participants. We are presently preparing
a book combining tutorial chapters and papers from the proceedings of that workshop [9]. In this paper, we present to the NIPS community a concise summary of
our challenge design and the findings of the result analysis.
2 Benchmark design
We formatted five datasets (Table 1) from various application domains. All datasets
are two-class classification problems. The data were split into three subsets: a
training set, a validation set, and a test set. All three subsets were made available
at the beginning of the benchmark, on September 8, 2003. The class labels for the
validation set and the test set were withheld. The identity of the datasets and of
the features (some of which were random features artificially generated) were kept
secret. The participants could submit prediction results on the validation set and get
their performance results and ranking on-line for a period of 12 weeks. By December
1st , 2003, which marked the end of the development period, the participants had to
turn in their results on the test set. Immediately after that, the validation set labels
were revealed. On December 8th , 2003, the participants could make submissions of
test set predictions, after having trained on both the training and the validation
set. Some details on the benchmark design are provided in this Section.
Challenge format
We gave our benchmark the format of a challenge to stimulate participation. We
made available an automatic web-based system to submit prediction results and
get immediate feed-back, inspired by the system of the NIPS2000 and NIPS2001
unlabelled data competitions [4, 5]. However, unlike what had been done for these
other competitions, we used a "validation set" to assess performance during the
development period, and a separate "test set" for final scoring.
During development participants could submit validation results on any of the five
datasets proposed (not necessarily all). Competitors were required to submit results
on all five test sets by the challenge deadline to be included in the final ranking. This
avoided a common problem of "multiple track" benchmarks in which no conclusion
can be drawn because too few participants enter all tracks.
To promote collaboration between researchers, reduce the level of anxiety, and let
people explore various strategies (e.g. "pure" methods and "hybrids"), we allowed
participating groups to submit a total of five final entries on December 1st and five
entries on December 8th .
Our format was very successful: it attracted 78 research groups who competed
for 13 weeks and made (submitted) a total of 1863 entries. Twenty groups were
eligible for being ranked on December 1st (56 submissions¹), and 16 groups on
December 8th (36 submissions). The feature selection benchmark web site at
www.nipsfsc.ecs.soton.ac.uk remains available as a resource for researchers in
feature selection.
¹ After imposing a maximum of 5 submissions per group and eliminating some incomplete submissions, there remained 56 eligible submissions out of the 135 received.
Table 1: NIPS 2003 challenge datasets. For each dataset we show the domain
it was taken from, its type (dense, sparse, or sparse binary), the number of features,
the percentage of probes, and the number of examples in the training, validation,
and test sets. All problems are two-class classification problems.

Dataset    Domain                Type           #Fe      %Pr   #Tr    #Val   #Te
Arcene     Mass spectrometry     Dense          10000    30    100    100    700
Dexter     Text classification   Sparse         20000    50    300    300    2000
Dorothea   Drug discovery        Sparse binary  100000   50    800    350    800
Gisette    Digit recognition     Dense          5000     30    6000   1000   6500
Madelon    Artificial            Dense          500      96    2000   600    1800
The challenge datasets
Until the late 90s most published papers on feature selection considered datasets
with less than 40 features² (see [1, 11] from a 1997 special issue on relevance for
example). The situation has changed considerably in the past few years, and in
the 2003 special issue we edited for JMLR including papers from the proceedings
of the NIPS 2001 workshop [7], most papers explore domains with hundreds to
tens of thousands of variables or features. The applications are driving this effort:
bioinformatics, chemistry (drug design, cheminformatics), text processing, pattern
recognition, speech processing, and machine vision provide machine learning problems in very high dimensional spaces, but often with comparatively few examples.
Feature selection is a particular way of tackling the problem of space dimensionality reduction. The necessary computing power to handle large datasets is now
available in simple laptops, so there is a proliferation of solutions proposed for such
feature selection problems. Yet, there does not seem to be an emerging unity of
experimental design and algorithms. We formatted five datasets for the purpose of
benchmarking variable selection algorithms (see Table 1.)
The datasets were chosen to span a variety of domains and difficulties (the input
variables are continuous or binary, sparse or dense; one dataset has unbalanced
classes.) One dataset (Madelon) was artificially constructed to illustrate a particular difficulty: selecting a feature set when no feature is informative by itself.
We chose datasets that had sufficiently many examples to create a large enough
test set to obtain statistically significant results [6]. To prevent researchers familiar
with the datasets to have an advantage, we concealed the identity of the datasets
during the benchmark. We performed several preprocessing and data formatting
steps, which contributed to disguising the origin of the datasets. In particular, we
introduced a number of features called probes. The probes were drawn at random
from a distribution resembling that of the real features, but carrying no information
about the class labels. Such probes have a function in performance assessment: a
good feature selection algorithm should eliminate most of the probes. The details
of data preparation can be found in a technical memorandum [6].
² In this paper, we do not make a distinction between features and variables. The
benchmark addresses the problem of selecting input variables. Those may actually be
features derived from the original variables through preprocessing.
Table 2: We show the top entries sorted by their score (times 100), the balanced
error rate in percent (BER) and corresponding rank in parentheses, the area under
the ROC curve times 100 (AUC) and corresponding rank in parentheses, the percentage of features used (Fe), and the percentage of probes in the features selected
(Pr).

(a) December 1st, 2003 challenge results.

Method (Team)               Score   BER   (rank)   AUC    (rank)   Fe     Pr
BayesNN-DFT (Neal/Zhang)    88.0    6.84  (1)      97.22  (1)      80.3   47.8
BayesNN-DFT (Neal/Zhang)    86.2    6.87  (2)      97.21  (2)      80.3   47.8
BayesNN-small (Neal)        68.7    8.20  (3)      96.12  (5)      4.7    2.9
BayesNN-large (Neal)        59.6    8.21  (4)      96.36  (3)      60.3   28.5
RF+RLSC (Torkkola/Tuv)      59.3    9.07  (7)      90.93  (29)     22.5   17.5
final2 (Chen)               52.0    9.31  (9)      90.69  (31)     24.9   12.0
SVMBased3 (Zhili/Li)        41.8    9.21  (8)      93.60  (16)     29.5   21.7
SVMBased4 (Zhili/Li)        41.1    9.40  (10)     93.41  (18)     29.5   21.7
final1 (Chen)               40.4    10.38 (23)     89.62  (34)     6.2    6.1
transSVM2 (Zhili)           36.0    9.60  (13)     93.21  (20)     29.5   21.7
BayesNN-E (Neal)            29.5    8.43  (5)      96.30  (4)      96.8   56.7
Collection2 (Saffari)       28.0    10.03 (20)     89.97  (32)     7.7    10.6
Collection1 (Saffari)       20.7    10.06 (21)     89.94  (33)     32.3   25.5

(b) December 8th, 2003 challenge results.

Method (Team)               Score   BER   (rank)   AUC    (rank)   Fe     Pr
BayesNN-DFT (Neal/Zhang)    71.4    6.48  (1)      97.20  (1)      80.3   47.8
BayesNN-large (Neal)        66.3    7.27  (3)      96.98  (3)      60.3   28.5
BayesNN-small (Neal)        61.1    7.13  (2)      97.08  (2)      4.7    2.9
final 2-3 (Chen)            49.1    7.91  (8)      91.45  (25)     24.9   9.9
BayesNN-large (Neal)        49.1    7.83  (5)      96.78  (4)      60.3   28.5
final2-2 (Chen)             40.0    8.80  (17)     89.84  (29)     24.6   6.7
Ghostminer1 (Ghostminer)    37.1    7.89  (7)      92.11  (21)     80.6   36.1
RF+RLSC (Torkkola/Tuv)      35.4    8.04  (9)      91.96  (22)     22.4   17.5
Ghostminer2 (Ghostminer)    35.4    7.86  (6)      92.14  (20)     80.6   36.1
RF+RLSC (Torkkola/Tuv)      34.3    8.23  (12)     91.77  (23)     22.4   17.5
FS+SVM (Lal)                31.4    8.99  (19)     91.01  (27)     20.9   17.3
Ghostminer3 (Ghostminer)    26.3    8.24  (13)     91.76  (24)     80.6   36.1
CBAMethod3E (CBAGroup)      21.1    8.14  (10)     96.62  (5)      12.8   0.1
CBAMethod3E (CBAGroup)      21.1    8.14  (11)     96.62  (6)      12.8   0.1
Nameless (Navot/Bachrach)   12.0    7.78  (4)      96.43  (9)      32.3   16.2
Performance assessment
Final submissions qualified for scoring if they included the class predictions for
training, validation, and test sets for all five tasks proposed, and the list of features
used. Optionally, classification confidence values could be provided. Performance
was assessed using several metrics:
• BER: The balanced error rate, that is, the average of the error rate of the
positive class and the error rate of the negative class. This metric was used
because some datasets (particularly Dorothea) are unbalanced (a computational sketch follows this list).
• AUC: Area under the ROC curve. The ROC curve is obtained by varying a
threshold on the discriminant values (outputs) of the classifier. The curve
represents the fraction of true positives as a function of the fraction of false
positives. For classifiers with binary outputs, BER = 1 − AUC.
• Ffeat: Fraction of features selected.
• Fprobe: Fraction of probes found in the feature set selected.
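For concreteness, the BER can be computed as in the following sketch (our code, not the organizers' scoring software):

```python
def balanced_error_rate(y_true, y_pred):
    """BER: mean of the per-class error rates (two-class case, labels 0/1)."""
    errors = {0: 0, 1: 0}
    counts = {0: 0, 1: 0}
    for t, p in zip(y_true, y_pred):
        counts[t] += 1
        if p != t:
            errors[t] += 1
    return (errors[0] / counts[0] + errors[1] / counts[1]) / 2

print(balanced_error_rate([1, 1, 0, 0], [1, 0, 0, 0]))  # -> 0.25
```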
We ranked the participants with the test set results using a score combining BER,
Ffeat and Fprobe. Briefly: we used the McNemar test to determine whether classifier A is better than classifier B according to the BER, with 5% risk, yielding a
score of 1 (better), 0 (don't know) or −1 (worse). Ties (zero score) were broken with
Ffeat (if the relative difference in Ffeat was larger than 5%). Remaining ties were
broken with Fprobe. The overall score for each dataset is the sum of the pairwise
comparison scores (normalized by the maximum achievable score, that is, the number of submissions minus one). The code is provided on the challenge website. The
scores were averaged over the five datasets. Our scoring method favors accuracy
over feature set compactness.
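The pairwise comparison just described can be sketched as follows; `mcnemar_better` is a placeholder for the 5%-risk McNemar test, and the entries are assumed to carry their BER, Ffeat, and Fprobe values (our code, not the challenge scoring software):

```python
def pairwise_score(a, b, mcnemar_better):
    """Score of entry a against b: +1, 0, or -1."""
    if mcnemar_better(a, b):   # a's BER significantly better at 5% risk
        return 1
    if mcnemar_better(b, a):
        return -1
    # tie on BER: break with Ffeat if the relative difference exceeds 5%
    if abs(a["ffeat"] - b["ffeat"]) > 0.05 * max(a["ffeat"], b["ffeat"]):
        return 1 if a["ffeat"] < b["ffeat"] else -1
    # remaining ties: fewer probes among the selected features wins
    if a["fprobe"] != b["fprobe"]:
        return 1 if a["fprobe"] < b["fprobe"] else -1
    return 0

def overall_score(entries, mcnemar_better):
    """Sum of pairwise scores, normalized by (number of submissions - 1)."""
    n = len(entries)
    return [sum(pairwise_score(a, b, mcnemar_better)
                for b in entries if b is not a) / (n - 1)
            for a in entries]
```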
Our benchmark design could not prevent participants from "cheating" in the following way. An entrant could "declare" a smaller feature subset than the one used
to make predictions. To deter participants from cheating, we warned them that we
would be performing a stage of verification. We performed several checks as detailed
in [9] and did not find any entry that should be suspected of being fraudulent.
3 Challenge results
The overall scores of the best entries are shown in Table 2. The main features of
the methods of the participants listed in that table are summarized in Table 3. The
analysis of this section also includes the survey of ten more top ranking participants.
Winners
The winners of the benchmark (both December 1st and 8th ) are Radford Neal and
Jianguo Zhang, with a combination of Bayesian neural networks [14] and Dirichlet diffusion trees [15]. Their achievements are significant since they win on the
overall ranking with respect to our scoring metric, but also with respect to the balanced error rate (BER), the area under the ROC curve (AUC), and they have the
smallest feature set among the top entries that have performance not statistically
significantly worse. They are also the top entrants on December 1st for Arcene and
Dexter and December 1st and 8th for Dorothea.
Two aspects of their approach were the same for all data sets:
• They reduced the number of features used for classification to no more
than a few hundred, either by selecting a subset of features using simple
univariate significance tests, or by Principal Component Analysis (PCA)
performed on all available labeled and unlabeled data.
• They then applied a classification method based on Bayesian learning, using
an Automatic Relevance Determination (ARD) prior that allows the model
to determine which of these features are most relevant.
Bayesian neural network learning with computation by Markov chain Monte Carlo
(MCMC) is a well developed technology [14]. Dirichlet diffusion trees are a new
Bayesian approach to density modeling and hierarchical clustering. As allowed by
the challenge rules, the winners constructed these trees using both the training
data and the unlabeled data in the validation and test sets. Classification was then
performed with the k-nearest neighbors method, using the metric induced by the
tree.
Table 3: Methods employed by the challengers. The classifiers are grouped
in four categories: N - neural network, K - SVM or other kernel method, T -
tree classifiers (none found in the top ranking methods), O - other. The feature
selection engines (Fengine) are grouped in three categories: C - single variable
criteria including correlation coefficients, T - tree classifiers or RF used as a filter,
E - wrapper or embedded methods. The search methods are identified by: E -
embedded, R - feature ranking, B - backward elimination, S - more elaborate
search.

Team             Classifier   Fengine   Fsearch   Ensemble   Transduction
Neal/Zhang       N/O          C/E       E         Yes        Yes
Torkkola/Tuv     K            T         R         Yes        No
Chen/Lin         K            C/T/E     R/E       No         No
Zhili/Li         K            C/E       E         No         Yes
Saffari          N            C         R         Yes        No
Ghostminer       K            C/T       B         Yes        No
Lal et al.       K            C         R         No         No
CBAGroup         K            C         R         No         No
Bachrach/Navot   K/O          E         S         No         No
Other methods employed
We group methods into coarse categories to draw useful conclusions. Our findings
include:
Feature selection The winners and several top ranking challengers use a combination of filters and embedded methods³. Several high ranking participants
obtain good results using only filters, even simple correlation coefficients.
The second best entrants use Random Forests, an ensemble of tree classifiers, to perform feature selection [3].⁴ Search strategies are generally unsophisticated (simple feature ranking, forward selection or backward elimination). Only 2 out of 19 in our survey used a more sophisticated search
strategy. The selection criterion is usually based on cross-validation. A
majority use K-fold, with K between 3 and 10. Only one group used "random probes" purposely introduced to track the fraction of falsely selected
features. One group used the area under the ROC curve computed on the
training set.
Classifier Kernel methods [16] are most popular: 7/9 in Table 3 and 12/19 in
the survey. Of the 12 kernel methods employed, 8 are SVMs. In spite of
the high risk of overfitting, 7 of the 9 top groups using kernel methods
found that Gaussian kernels gave them better results than the linear kernel
on Arcene, Dexter, Dorothea, or Gisette (for Madelon all best
ranking groups used a Gaussian kernel).
Ensemble methods Some groups relied on a committee of classifiers to make the
final decision. The techniques to build such a committee include sampling
³ We distinguish embedded methods that have a feature selection mechanism built into
the learning algorithm from wrappers, which perform feature selection by using the classifier as a black box.
⁴ Random Forests (RF) are classification techniques with an embedded feature selection
mechanism. The participants used the features generated by RF, but did not use RF for
classification.
from the posterior distribution using MCMC [14] and bagging [2]. Most
groups that used ensemble methods reported improved accuracy.
Transduction Since all the datasets were provided from the beginning of the
benchmark (validation and test sets deprived of their class labels), it was
possible to make use of the unlabelled data as part of learning (sometimes
referred to as transduction [17]). Only two groups took advantage of that,
including the winners.
Preprocessing Centering and scaling the features was the most common preprocessing used. Some methods required discretization of the features. One
group normalized the patterns. Principal Component Analysis (PCA) was
used by several groups, including the winners, as a means of constructing
features.
4 Conclusions and future work
The challenge demonstrated both that feature selection can be performed
effectively and that eliminating meaningless features is not critical to
achieve good classification performance. By design, our datasets include many
irrelevant "distracter" features, called "probes". In contrast with redundant features, which may not be needed to improve accuracy but carry information, those
distracters are "pure noise". It is surprising that some of the best entries use all
the features. Still, there is always another entry close in performance, which uses
only a small fraction of the original features.
The challenge outlined the power of filter methods. For many years, filter methods have dominated feature selection for computational reasons. It was understood
that wrapper and embedded methods are more powerful, but too computationally
expensive. Some of the top ranking entries use one or several filters as their only
selection strategy. A filter as simple as the Pearson correlation coefficient proves
to be very effective, even though it does not remove feature redundancy and therefore yields unnecessarily large feature subsets. Other entrants combined filters and
embedded methods to further reduce the feature set and eliminate redundancies.
Another important outcome is that non-linear classifiers do not necessarily overfit.
Several challenge datasets included a very large number of features (up to 100,000)
and only a few hundred examples. Therefore, only methods that avoid overfitting
can succeed in such adverse aspect ratios. Not surprisingly, the winning entries
use as classifiers either ensemble methods or strongly regularized classifiers. More
surprisingly, non-linear classifiers often outperform linear classifiers. Hence, with
adequate regularization, non-linear classifiers do not overfit the data, even when the
number of features exceeds the number of examples by orders of magnitude.
Principal Component Analysis was successfully used by several researchers to reduce
the dimension of input space down to a few hundred features, without any knowledge
of the class labels. This was not harmful to the prediction performances and greatly
reduced the computational load of the learning machines.
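As an illustration of this preprocessing step (scikit-learn is our choice here, not necessarily what the participants used):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.random((100, 10000))      # e.g. Arcene-sized: 100 x 10000
X_unlabeled = rng.random((800, 10000))  # validation + test patterns

# class labels are not needed for PCA, so all patterns can be pooled
# (as the winners did with the validation and test sets)
pca = PCA(n_components=200).fit(np.vstack([X_train, X_unlabeled]))
Z_train = pca.transform(X_train)        # 100 x 200 inputs for the classifier
```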
The analysis of the challenge results revealed that hyperparameter selection may
have played an important role in winning the challenge. Indeed, several groups
were using the same classifier (e.g. an SVM) and reported significantly different
results. We have started laying the basis of a new benchmark on the theme of
model selection and hyperparameter selection [8].
Acknowledgments
We are very thankful to the institutions that have contributed data: the National
Cancer Institute (NCI), the Eastern Virginia Medical School (EVMS), the National
Institute of Standards and Technology (NIST), DuPont Pharmaceuticals Research
Laboratories, Reuters Ltd., and the Carnegie Group, Inc. We also thank the people
who formatted the data and made them available: Thorsten Joachims, Yann Le
Cun, and the KDD Cup 2001 organizers. We thank Olivier Chapelle for providing
ideas and corrections. The workshop co-organizers and advisors Masoud Nikravesh,
Kristin Bennett, Richard Caruana, and André Elisseeff, are gratefully acknowledged
for their help, and advice, in particular with result dissemination.
References
[1] A. Blum and P. Langley. Selection of relevant features and examples in machine
learning. Artificial Intelligence, 97(1-2):245–271, December 1997.
[2] Leo Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
[3] Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[4] S. Kremer, et al. NIPS 2000 unlabeled data competition.
http://q.cis.uoguelph.ca/~skremer/Research/NIPS2000/, 2000.
[5] S. Kremer, et al. NIPS 2001 unlabeled data competition.
http://q.cis.uoguelph.ca/~skremer/Research/NIPS2001/, 2001.
[6] I. Guyon. Design of experiments of the NIPS 2003 variable selection benchmark.
http://www.nipsfsc.ecs.soton.ac.uk/papers/Datasets.pdf, 2003.
[7] I. Guyon and A. Elisseeff. An introduction to variable and feature selection.
JMLR, 3:1157–1182, March 2003.
[8] I. Guyon and S. Gunn. Model selection and ensemble methods challenge, in
preparation. http://clopinet.com/isabelle/projects/modelselect.
[9] I. Guyon, S. Gunn, M. Nikravesh, and L. Zadeh, editors. Feature Extraction,
Foundations and Applications. Springer-Verlag, in preparation.
http://clopinet.com/isabelle/Projects/NIPS2003/call-for-papers.html.
See also the on-line supplementary material:
http://clopinet.com/isabelle/Projects/NIPS2003/analysis.html.
[10] D. Kazakov, L. Popelinsky, and O. Stepankova. MLnet machine learning network on-line information service. http://www.mlnet.org.
[11] R. Kohavi and G. John. Wrappers for feature selection. Artificial Intelligence,
97(1-2):273–324, December 1997.
[12] D. LaLoudouana and M. Bonouliqui Tarare. Data set selection. In NIPS02,
http://www.jmlg.org/papers/laloudouana03.pdf, 2002.
[13] P. M. Murphy and D. W. Aha. UCI repository of machine learning databases.
http://www.ics.uci.edu/~mlearn/MLRepository.html, 1994.
[14] R. M. Neal. Bayesian Learning for Neural Networks. Number 118 in Lecture
Notes in Statistics. Springer-Verlag, New York, 1996.
[15] R. M. Neal. Defining priors for distributions using Dirichlet diffusion trees.
Technical Report 0104, Dept. of Statistics, University of Toronto, March 2001.
[16] B. Schoelkopf and A. Smola. Learning with Kernels – Support Vector Machines,
Regularization, Optimization and Beyond. MIT Press, Cambridge, MA, 2002.
[17] V. Vapnik. Statistical Learning Theory. John Wiley & Sons, N.Y., 1998.
| 2728 |@word madelon:3 repository:3 briefly:1 eliminating:2 achievable:1 elisseeff:2 concise:1 tr:1 minus:1 carry:1 reduction:1 wrapper:5 electronics:1 score:11 selecting:3 amp:1 past:1 com:4 discretization:1 surprising:1 yet:2 tackling:1 attracted:3 john:2 informative:1 kdd:2 dupont:1 remove:1 intelligence:2 selected:5 nips2000:2 website:1 beginning:2 institution:1 coarse:1 toronto:1 org:2 casp:1 five:9 zhang:5 constructed:2 sustained:1 falsely:1 pairwise:1 secret:1 indeed:1 proliferation:2 inspired:1 provided:5 classifies:1 gisette:2 project:3 mass:1 laptop:1 israel:1 what:1 emerging:1 dror:1 developed:1 finding:2 berkeley:1 tarare:1 tie:2 classifier:18 uk:4 medical:1 positive:2 declare:1 understood:1 service:1 black:1 chose:1 co:1 statistically:2 averaged:1 unsophisticated:1 acknowledgment:1 digit:1 dorothea:4 langley:1 area:4 drug:2 significantly:2 confidence:1 spite:1 get:2 unlabeled:4 selection:31 close:1 arcene:3 risk:2 www:6 demonstrated:1 resembling:1 survey:3 bachrach:2 immediately:2 pure:2 rule:1 handle:1 memorandum:1 olivier:1 us:1 designing:1 origin:1 recognition:2 particularly:1 expensive:1 gunn:4 submission:9 labeled:1 database:1 role:1 thousand:1 schoelkopf:1 edited:1 balanced:3 broken:2 asked:1 trained:1 carrying:1 asa:2 basis:1 chapter:1 various:2 leo:2 effective:1 monte:1 artificial:3 pearson:1 outcome:1 larger:1 supplementary:1 favor:1 statistic:2 itself:1 zhili:4 final:7 advantage:2 took:2 uci:3 combining:2 relevant:2 organizing:1 achieve:1 participating:1 competition:11 achievement:1 ben:1 thankful:1 help:1 illustrate:1 ac:5 nearest:1 ard:2 school:2 received:1 filter:9 deter:1 saffari:3 elimination:2 material:1 creativity:1 correction:1 sufficiently:1 considered:1 ic:1 week:3 driving:1 smallest:1 purpose:1 label:5 grouped:2 create:1 successfully:1 kristin:1 mit:1 challenger:2 gaussian:2 always:1 avoid:1 dexter:3 breiman:2 varying:1 derived:1 competed:1 joachim:1 rank:2 check:1 greatly:1 contrast:1 eliminate:2 compactness:1 issue:2 classification:12 overall:3 among:1 html:3 development:4 raised:1 special:2 extraction:2 washington:2 having:1 sampling:1 preparing:1 represents:1 unnecessarily:1 promote:1 future:1 report:1 richard:1 few:6 national:2 pharmaceutical:1 murphy:1 familiar:1 yielding:1 chain:1 necessary:1 tree:9 incomplete:1 harmful:1 aha:1 minimal:1 advisor:1 modeling:1 caruana:1 southampton:1 entry:14 subset:5 hundred:4 predictor:1 successful:1 too:2 virginia:1 reported:2 considerably:1 combined:3 st:7 density:1 worse:2 book:1 li:3 chemistry:1 summarized:1 includes:1 coefficient:3 inc:1 ranking:11 tuv:4 performed:5 relied:1 participant:18 elaborated:1 ass:1 il:1 publicly:1 accuracy:3 who:2 ensemble:6 yield:1 yes:6 bayesian:6 comparably:1 none:1 carlo:1 researcher:5 published:1 submitted:1 mlearn:1 sharing:1 centering:1 competitor:1 soton:4 dataset:5 popular:1 hur:1 knowledge:1 dimensionality:1 organized:4 sophisticated:1 actually:1 back:1 feed:1 steve:1 improved:1 done:1 box:1 though:1 strongly:1 stage:1 smola:1 until:1 correlation:3 overfit:2 web:2 assessment:2 quality:1 stimulate:2 aviv:1 usa:2 normalized:2 true:1 hence:1 regularization:2 laboratory:1 neal:13 during:4 auc:6 mlrepository:1 criterion:2 pdf:2 percent:1 nikravesh:2 nips2003:2 purposely:1 recently:1 laloudouana:1 common:2 winner:7 yaffo:1 significant:2 isabelle:5 cup:1 imposing:1 enter:1 dft:3 cambridge:1 automatic:2 outlined:1 gratefully:1 had:3 chapelle:1 posterior:1 irrelevant:1 verlag:2 formatted:3 binary:4 mcnemar:1 scoring:5 employed:3 determine:2 period:5 redundant:1 multiple:1 exceeds:1 
technical:2 unlabelled:2 academic:1 determination:1 offer:1 cross:1 lin:1 deadline:1 post:1 rlsc:3 parenthesis:2 prediction:7 vision:1 metric:4 kernel:9 sometimes:1 spectrometry:1 nci:1 kohavi:1 meaningless:1 unlike:1 induced:1 december:13 regularly:1 seem:1 call:1 ideal:1 revealed:2 split:1 enough:1 variety:2 gave:2 identified:1 reduce:3 idea:1 whether:1 pca:2 ltd:1 effort:2 f:1 speech:1 york:1 hardly:1 adequate:1 useful:1 generally:1 detailed:1 features2:1 listed:1 ten:2 svms:1 category:3 reduced:2 http:9 outperform:1 percentage:3 andr:1 tutorial:1 track:3 per:1 carnegie:1 hyperparameter:2 group:18 redundancy:2 four:1 threshold:1 blum:1 acknowledged:1 drawn:2 prevent:2 diffusion:4 kept:1 backward:2 fraction:6 year:2 sum:1 powerful:1 fraudulent:1 place:1 guyon:5 eligible:2 yann:1 draw:1 decision:1 zadeh:1 scaling:1 distinguish:1 played:1 fold:1 g:1 software:1 dominated:1 aspect:2 span:1 performing:1 format:3 department:2 mta:1 according:1 icpr:1 combination:3 march:2 dissemination:1 smaller:1 son:1 unity:1 cun:1 formatting:1 deprived:1 presently:1 organizer:2 pr:4 thorsten:1 taken:1 computationally:1 resource:1 remains:1 turn:1 icdar:1 committee:2 mechanism:2 needed:1 know:1 clopinet:5 ffeat:4 end:1 available:8 cheminformatics:1 probe:9 hierarchical:1 original:2 bagging:2 top:9 dirichlet:4 include:4 remaining:1 clustering:1 build:1 prof:1 forum:1 strategy:4 september:1 win:1 separate:1 thank:2 majority:1 discriminant:1 reason:1 laying:1 code:1 ratio:1 providing:1 anxiety:1 optionally:1 fe:4 negative:2 design:8 twenty:1 contributed:2 perform:2 datasets:22 markov:1 benchmark:15 withheld:1 nist:1 immediate:1 situation:1 defining:1 team:3 trec:1 community:2 introduced:2 cheating:2 required:2 nipsfsc:3 lal:2 engine:2 distinction:1 nip:9 address:1 beyond:1 usually:1 pattern:2 gideon:2 challenge:19 rf:7 including:5 built:1 power:2 critical:1 ranked:2 hybrid:1 difficulty:2 participation:1 regularized:1 improve:1 technology:2 started:1 text:2 prior:3 discovery:1 val:1 relative:1 embedded:8 lecture:1 entrant:4 validation:15 foundation:1 verification:1 suspected:1 editor:1 collaboration:1 cancer:1 summary:1 changed:1 surprisingly:2 kremer:2 qualified:1 eastern:1 ber:8 institute:2 neighbor:1 sparse:5 curve:6 dimension:1 genome:1 author:1 made:6 forward:1 preprocessing:4 avoided:1 ec:4 overfitting:2 navot:2 don:1 continuous:1 search:4 table:9 ca:3 tel:1 forest:4 necessarily:2 artificially:2 constructing:1 domain:6 submit:5 masoud:1 did:2 significance:1 dense:5 main:1 reuters:1 noise:1 allowed:2 site:2 referred:1 advice:1 benchmarking:1 roc:5 nips2001:2 transduction:3 wiley:1 theme:2 winning:2 jmlr:2 late:1 dozen:1 down:1 remained:1 load:1 list:1 torkkola:4 svm:3 workshop:7 false:1 vapnik:1 effectively:1 importance:1 ci:2 magnitude:1 te:1 chen:5 explore:2 univariate:1 distracters:2 radford:1 springer:2 ma:1 succeed:1 identity:2 marked:1 sorted:1 bennett:1 adverse:1 included:4 diminished:1 principal:3 called:3 total:3 experimental:1 college:1 support:1 people:2 unbalanced:2 assessed:1 relevance:2 bioinformatics:1 preparation:3 dept:1 mcmc:2 |
Mistake Bounds
for Maximum Entropy Discrimination
Philip M. Long
Center for Computational Learning Systems
Columbia University
[email protected]
Xinyu Wu
Department of Computer Science
National University of Singapore
[email protected]
Abstract
We establish a mistake bound for an ensemble method for classification
based on maximizing the entropy of voting weights subject to margin
constraints. The bound is the same as a general bound proved for the
Weighted Majority Algorithm, and similar to bounds for other variants
of Winnow. We prove a more refined bound that leads to a nearly optimal algorithm for learning disjunctions, again, based on the maximum
entropy principle. We describe a simplification of the on-line maximum
entropy method in which, after each iteration, the margin constraints are
replaced with a single linear inequality. The simplified algorithm, which
takes a similar form to Winnow, achieves the same mistake bounds.
1 Introduction
In this paper, we analyze a maximum-entropy procedure for ensemble learning in the online learning model. In this model, learning proceeds in trials. During the tth trial, the
algorithm (1) receives x_t ∈ {0,1}^n (interpreted in this work as a vector of base classifier
predictions), (2) predicts a class ŷ_t ∈ {0,1}, and (3) discovers the correct class y_t. During
trial t, the algorithm has access only to information from previous trials.
The first algorithm we will analyze for this problem was proposed by Jaakkola, Meila
and Jebara [14]. The algorithm, at each trial t, makes its prediction by taking a weighted
vote over the predictions of the base classifiers. The weight vector pt is the probability
distribution over the n base classifiers that maximizes the entropy, subject to the constraint
that p_t correctly classifies all patterns seen in previous trials with a given margin γ. That
is, it maximizes the entropy of p_t subject to the constraints that p_t · x_s ≥ 1/2 + γ whenever
y_s = 1 for s < t, and p_t · x_s ≤ 1/2 − γ whenever y_s = 0 for s < t.
We show that, if there is a weighting p*, determined with benefit of hindsight, that achieves
margin γ on all trials, then this on-line maximum entropy procedure makes at most
(ln n)/(2γ²) mistakes.
Littlestone [19] proved the same bound for the Weighted Majority Algorithm [21], and a
similar bound for the Balanced Winnow Algorithm [19]. The original Winnow algorithm
was designed to solve the problem of learning a hidden disjunction of a small number
k out of a possible n boolean variables. When this problem is reduced to our general
setting in the most natural way, the resulting bound is Θ(k² log n), whereas Littlestone
proved a bound of ek ln n for Winnow. We prove more refined bounds for a wider family
of maximum-entropy algorithms, which use thresholds different than 1/2 (as proposed in
[14]) and class-sensitive margins. A mistake bound of ek ln n for learning disjunctions is a
consequence of this more refined analysis.
The optimization needed at each round can be cast as minimizing a convex function subject
to convex constraints, and thus can be solved in polynomial time [25]. However, the same
mistake bounds hold for a similar, albeit linear-time, algorithm. This algorithm, after each
trial, replaces all constraints from previous trials with a single linear inequality. (This is
analogous to a modification of SVMs leading to the ROMMA algorithm [18].) The resulting
update is similar in form to Winnow.
Littlestone [19] analyzed some variants of Winnow by showing that mistakes cause a reduction in the relative entropy between the learning algorithm's weight vector, and that of
the target function. Kivinen and Warmuth [16] showed that an algorithm related to Winnow trades optimally in a sense between accommodating the information from new data,
and keeping the relative entropy between the new and old weight vectors small. Blum [4]
identified a correspondence between Winnow and a different application of the maximum
entropy principle, in which the algorithm seeks to maximize the average entropy of the
conditional distribution over the class designations (the y_t's) subject to constraints arising
from the examples, as proposed in [2]. Our proofs have a similar structure to the analysis
of ROMMA [18]. Our problems fall within the general framework analyzed by Gordon
[11]; while Gordon's results expose interesting relationships among learning algorithms,
applying them did not appear to be the most direct route to solving our concrete problem,
nor did they appear likely to result in the most easily understood proofs. As in related analyses like mistake bounds for the perceptron algorithm [22], Winnow [19] and the Weighted
Majority Algorithm [19], our bound holds for any sequence of (x_t, y_t) pairs satisfying the
separation condition; in particular no independence assumptions are needed. Langford,
Seeger and Megiddo [17] performed a related analysis, incomparable in strength, using
independence assumptions. Other related papers include [3, 20, 5, 15, 26, 13, 8, 27, 7].
The proofs of our main results do not contain any calculation; they combine simple geometric arguments with established information theory. The proof of the main result proceeds
roughly as follows. If there is a mistake on trial t, it is corrected with a large margin by
p_{t+1}. Thus p_{t+1} must assign a significantly different probability to the voters predicting
1 on trial t than p_t does. Applying an inequality known as Pinsker's inequality, this means
that the relative entropy between p_{t+1} and p_t is large. Next, we exploit the fact that the
constraints satisfied by p_t, and therefore by p_{t+1}, are convex to show that moving from p_t to
p_{t+1} must take you away from the uniform distribution, thus decreasing the entropy. The
theorem then follows from the fact that the entropy can only be reduced by a total of ln n.
The refinement leading to an ek ln n bound for disjunctions arises from the observation that
Pinsker's inequality can be strengthened when the probabilities being compared are small.
The analysis of this paper lends support to a view of Winnow as a fast, incremental approximation to the maximum entropy discrimination approach, and suggests a variant of
Winnow that corresponds more closely to the inductive bias of maximum entropy.
2 Preliminaries
Let n be the number of base classifiers. To avoid clutter, for the rest of the paper, "probability distribution" should be understood to mean "probability distribution over {1, ..., n}."
2.1 Margins
For u ∈ [0,1], define θ(u) to be 1 if u ≥ 1/2, and 0 otherwise. For a feature vector
x ∈ {0,1}^n and a class designation y ∈ {0,1}, say that a probability distribution p is
correct with margin γ if θ(p · x) = y and |p · x − 1/2| ≥ γ. If x and y were encountered
in a trial of a learning algorithm, we say that p is correct with margin γ on that trial.
2.2 Entropy, relative entropy, and variation

Recall that, for probability distributions p = (p_1, ..., p_n) and q = (q_1, ..., q_n),

• the entropy of p, denoted by H(p), is defined by H(p) = Σ_{i=1}^n p_i ln(1/p_i),
• the relative entropy between p and q, denoted by D(p||q), is defined by D(p||q) = Σ_{i=1}^n p_i ln(p_i/q_i), and
• the variation distance between p and q, denoted by V(p, q), is defined to be the maximum difference between the probabilities that they assign to any set:

    V(p, q) = max_{x ∈ {0,1}^n} (p · x − q · x) = (1/2) Σ_{i=1}^n |p_i − q_i|.        (1)

Relative entropy and variation distance are related by Pinsker's inequality.

Lemma 1 ([23]) For all p and q, D(p||q) ≥ 2V(p, q)².
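A quick numeric check of these definitions and of Lemma 1 (our example, not part of the paper):

```python
import math

def relative_entropy(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def variation_distance(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

p, q = [0.7, 0.2, 0.1], [1/3, 1/3, 1/3]
# Pinsker's inequality: D(p||q) >= 2 V(p,q)^2
assert relative_entropy(p, q) >= 2 * variation_distance(p, q) ** 2
```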
2.3 Information geometry

Relative entropy obeys something like the Pythagorean Theorem.

Lemma 2 ([9]) Suppose q is a probability distribution, C is a convex set of probability
distributions, and r is the element of C that minimizes D(r||q). Then for any p ∈ C,

    D(p||q) ≥ D(p||r) + D(r||q).

If C can be defined by a system of linear equations, then

    D(p||q) = D(p||r) + D(r||q).
3 Maximum Entropy with Margin

In this section, we will analyze the algorithm OME_γ ("on-line maximum entropy") that at
the tth trial

• chooses p_t to maximize the entropy H(p_t), subject to the constraint that it is correct with margin γ on all pairs (x_s, y_s) seen in the past (with s < t), and
• predicts 1 if and only if p_t · x_t ≥ 1/2.

In our analysis, we will assume that there is always a feasible p_t.
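Each trial of OME_γ requires maximizing a concave function subject to linear constraints, which a generic solver can handle. A minimal sketch using scipy (our implementation choice, not tuned for speed) follows:

```python
import numpy as np
from scipy.optimize import minimize

def ome_weights(X, y, gamma, n):
    """Max-entropy distribution p over n voters, subject to margin-gamma
    correctness on all past trials (rows of X with labels y).  A sketch."""
    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)   # guard log(0)
        return np.sum(p * np.log(p))  # minimizing this maximizes H(p)

    cons = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
    for x, lab in zip(X, y):
        if lab == 1:   # require p.x >= 1/2 + gamma
            cons.append({"type": "ineq",
                         "fun": lambda p, x=x: p @ x - (0.5 + gamma)})
        else:          # require p.x <= 1/2 - gamma
            cons.append({"type": "ineq",
                         "fun": lambda p, x=x: (0.5 - gamma) - p @ x})

    p0 = np.full(n, 1.0 / n)
    res = minimize(neg_entropy, p0, constraints=cons,
                   bounds=[(0.0, 1.0)] * n)
    return res.x
```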
The following is our main result.
Theorem 3 If there is a fixed probability distribution p* that is correct with margin γ on
all trials, OME_γ makes at most (ln n)/(2γ²) mistakes.

Proof: We will show that a mistake causes the entropy of the hypothesis to drop by at least
2γ². Since the constraints only become more restrictive, the entropy never increases, and
so the fact that the entropy lies between 0 and ln n will complete the proof.

Suppose trial t was a mistake. The definition of p_{t+1} ensures that p_{t+1} · x_t is on the correct
side of 1/2 by at least γ. But p_t · x_t was on the wrong side of 1/2. Thus |p_{t+1} · x_t − p_t ·
x_t| ≥ γ. Either p_{t+1} · x_t − p_t · x_t ≥ γ, or the bitwise complement c(x_t) of x_t satisfies
p_{t+1} · c(x_t) − p_t · c(x_t) ≥ γ. Thus V(p_{t+1}, p_t) ≥ γ. Therefore, Pinsker's Inequality
(Lemma 1) implies that

    D(p_{t+1}||p_t) ≥ 2γ².        (2)

Let C_t be the set of all probability distributions that satisfy the constraints in effect when
p_t was chosen, and let u = (1/n, ..., 1/n). Since p_{t+1} is in C_t (it must satisfy the constraints that p_t did), Lemma 2 implies D(p_{t+1}||u) ≥ D(p_{t+1}||p_t) + D(p_t||u) and thus
D(p_{t+1}||u) − D(p_t||u) ≥ D(p_{t+1}||p_t) which, since D(p||u) = (ln n) − H(p) for all p,
implies H(p_t) − H(p_{t+1}) ≥ D(p_{t+1}||p_t). Applying (2), we get H(p_t) − H(p_{t+1}) ≥ 2γ².
As described above, this completes the proof.
Because H(p_t) is always at least H(p*), the same analysis leads to a mistake bound of
(ln n − H(p*))/(2γ²). Further, a nearly identical proof establishes the following (details
are omitted from this abstract).

Theorem 4 Suppose OME_γ is modified so that p_1 is set to be something other than the
uniform distribution, and each p_t minimizes D(p_t||p_1) subject to the same constraints.
If there is a fixed p* that is correct with margin γ on all trials, the modified algorithm
makes at most D(p*||p_1)/(2γ²) mistakes.
4 Maximum Entropy for Learning Disjunctions

In this section, we show how the maximum entropy principle can be used to efficiently
learn disjunctions.

For a threshold b, define θ_b(u) to be 1 if u ≥ b and 0 otherwise. For a feature vector
x ∈ {0,1}^n and a class designation y ∈ {0,1}, say that p is correct at threshold b with
margin γ if θ_b(p · x) = y and |p · x − b| ≥ γ.

The algorithm OME_{b,γ+,γ−} analyzed in this section, on the tth trial,

• chooses p_t to maximize the entropy H(p_t), subject to the constraint that it is correct at threshold b with margin γ+ on all pairs (x_s, y_s) with y_s = 1 seen in the past (with s < t), and correct at threshold b with margin γ− on all such pairs (x_s, y_s) with y_s = 0, and then
• predicts 1 if and only if p_t · x_t ≥ b.

Note that the algorithm OME_γ considered in Section 3 can also be called OME_{1/2,γ,γ}.
For scalars ρ, σ ∈ [0,1], define d(ρ||σ) = D((ρ, 1−ρ)||(σ, 1−σ)), often called "entropic loss."

Lemma 5 If there is an x ∈ {0,1}^n such that p · x = ρ and q · x = σ, then D(p||q) ≥ d(ρ||σ).

Proof: Application of Lagrange multipliers, together with the fact that D is convex [6],
implies that D(p||q) is minimized, subject to the constraints that p · x = ρ and q · x = σ,
when (1) p_i is the same for all i with x_i = 1, (2) q_i is the same for all i with x_i = 1,
(3) p_i is the same for all i with x_i = 0, and (4) q_i is the same for all i with x_i = 0. The
above four properties, together with the constraints, are enough to uniquely specify p and
q. Evaluating D(p||q) in this case gives the result.
Theorem 6 Suppose there is a probability distribution p* that is correct at threshold b,
with a margin γ+ on all trials t with y_t = 1, and with margin γ− on all trials with y_t = 0.
Then OME_{b,γ+,γ−} makes at most (ln n) / min{d(b+γ+ || b), d(b−γ− || b)} mistakes.

Proof: The outline of the proof is similar to the proof of Theorem 3. We will show that
mistakes cause the entropy of the algorithm's hypothesis to decrease.

Arguing as in the proof of Theorem 3, H(p_{t+1}) ≤ H(p_t) − D(p_{t+1}||p_t). Lemma 5 then
implies that

    H(p_{t+1}) ≤ H(p_t) − d(p_{t+1} · x_t || p_t · x_t).        (3)

If there was a mistake on trial t for which y_t = 1, then p_t · x_t < b, and p_{t+1} · x_t ≥ b + γ+.
Thus in this case d(p_{t+1} · x_t || p_t · x_t) ≥ d(b + γ+ || b). Similarly, if there was a mistake
on trial t for which y_t = 0, then d(p_{t+1} · x_t || p_t · x_t) ≥ d(b − γ− || b).

Once again, these two bounds on d(p_{t+1} · x_t || p_t · x_t), together with (3) and the fact that
the entropy is between 0 and ln n, complete the proof.
The analysis of Theorem 6 can also be used to prove bounds for the case in which mistakes
of different types have different costs, as considered in [12].
Theorem 6 improves on Theorem 3 even in the case in which γ+ = γ− and b = 1/2. For
example, if γ = 1/4, Theorem 6 gives a bound of 7.65 ln n, where Theorem 3 gives an
8 ln n bound.
Next, we apply Theorem 6 to analyze the problem of learning disjunctions.
Corollary 7 If there are k of the n features such that each y_t is the disjunction of those
features in x_t, then algorithm OME_{1/(ek), 1/k−1/(ek), 1/(ek)} makes at most ek ln n mistakes.

Proof Sketch: If the target weight vector p* assigns equal weight to each of the variables
in the disjunction, then when y = 1 the weight of variables evaluating to 1 is at least 1/k,
and when y = 0 it is 0. So the hypothesis of Theorem 6 is satisfied when b = 1/(ek),
γ+ = 1/k − b and γ− = b. Plugging into Theorem 6, simplifying and overapproximating
completes the proof.
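For concreteness, the parameter settings of Corollary 7 and the resulting bound can be computed as follows (a worked example of ours):

```python
import math

def disjunction_params(k):
    """Threshold and margins used by Corollary 7: OME_{b, gamma+, gamma-}."""
    b = 1.0 / (math.e * k)
    gamma_plus = 1.0 / k - b   # = 1/k - 1/(ek)
    gamma_minus = b            # = 1/(ek)
    return b, gamma_plus, gamma_minus

def mistake_bound(k, n):
    """e * k * ln(n), the bound of Corollary 7."""
    return math.e * k * math.log(n)

print(mistake_bound(3, 1000))  # ~56.3 mistakes for a 3-literal disjunction
```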
To get a more readable, but weaker, variant of Theorem 6, we will use the following bound,
implicit in the analysis of Angluin and Valiant [1] (see Theorem 1.1 of [10] for a more
explicit proof, and [24] for a closely related bound). It improves on Pinsker's inequality
(Lemma 1) when n = 2, p is small, and q is close to p.

Lemma 8 ([1]) If 0 ≤ p ≤ 2q, then d(p||q) ≥ (p − q)²/(3q).
The following is a direct consequence of Lemma 8 and Theorem 6. Note that in the case of
disjunctions, it leads to a weaker 6k ln n bound.
Theorem 9 If there is a probability distribution p* that is correct at threshold b with a
margin γ on all trials, then OME_{b,γ,γ} makes at most (3b ln n)/γ² mistakes.
5 Relaxed on-line maximum entropy algorithms

Let us refer to the halfspace of probability distributions that satisfy the constraint of trial t
as T_t, and to the associated separating hyperplane as J_t. Recall that C_t is the set of feasible
solutions to all the constraints in effect when p_t is chosen. So p_{t+1} maximizes entropy
subject to membership in C_{t+1} = T_t ∩ C_t.

Figure 1: In ROME, the constraints C_t in effect before the tth round are replaced by the
halfspace S_t.
Our proofs only used the following facts about the OME algorithm: (a) p_{t+1} ∈ T_t, (b) p_t
is the maximum entropy member of C_t, and (c) p_{t+1} ∈ C_t.

Suppose A_t is the set of weight vectors with entropy at least that of p_t. Let H_t be the
hyperplane tangent to A_t at p_t. Finally, let S_t be the halfspace with boundary H_t containing
p_{t+1}. (See Figure 1.) Then (a), (b) and (c) hold if C_t is replaced with S_t. (The least obvious
is (b), which follows since H_t is tangent to A_t at p_t, and the entropy function is strictly
concave.)
Also, as previously observed by Littlestone [19], the algorithm might just as well not respond to trials in which there is not a mistake. Let us refer to an algorithm that does both
of these as a Relaxed On-line Maximum Entropy (ROME) algorithm.
A similar observation regarding an on-line SVM algorithm led to the simple ROMMA
algorithm [18]. In that case, it was possible to obtain a simple closed-form expression for
the new weight vector. Matters are only slightly more complicated here.

Proposition 10 If trial t is a mistake, and q maximizes entropy subject to membership in
S_t ∩ T_t, then it is on the separating hyperplane for T_t.
Proof: Because q and p_t both satisfy S_t, any convex combination of the two satisfies S_t.
Thus, if q were in the interior of T_t, we could find a probability distribution with higher
entropy that still satisfies both S_t and T_t by taking a tiny step from q toward p_t. This would
contradict the assumption that q is the maximum entropy member of S_t ∩ T_t.
This implies that the next hypothesis of a ROME algorithm is either on J_t (the separating
hyperplane of T_t) only, or on both J_t and H_t (the boundary hyperplane of S_t). The following
theorem will enable us to obtain a formula in either case.
Lemma 11 ([9] (Theorem 3.1)) Suppose q is a probability distribution, and C is a set defined by linear constraints as follows: for an m × n real matrix A and an m-dimensional
column vector b, C = {r : Ar = b}. Then if r is the member of C minimizing
D(r||q), there are scalar constants Z, c_1, ..., c_m such that for all i ∈ {1, ..., n},

    r_i = exp(Σ_{j=1}^m c_j a_{j,i}) q_i / Z.
If the next hypothesis pt+1 of a ROME algorithm is on Ht , then by Lemma 2, it and all
other members of Ht satisfy D(pt+1 ||u) = D(pt+1 ||pt ) + D(pt ||u). Thus, in this case,
p_{t+1} also minimizes D(q‖p_t) from among the members q of H_t ∩ J_t. Thus, Lemma 11
implies that pt+1,i /pt,i is the same for all i with xi = 1, and the same for all i with xi = 0.
This implies that, for ROME_{b,γ₊,γ₋}, if there was a mistake on trial t,
$$p_{t+1,i} = \begin{cases} \dfrac{(b+\gamma_+)\,p_{t,i}}{p_t \cdot x_t} & \text{if } x_{t,i}=1 \text{ and } y_t=1 \\[4pt] \dfrac{(1-(b+\gamma_+))\,p_{t,i}}{1-(p_t \cdot x_t)} & \text{if } x_{t,i}=0 \text{ and } y_t=1 \\[4pt] \dfrac{(b-\gamma_-)\,p_{t,i}}{p_t \cdot x_t} & \text{if } x_{t,i}=1 \text{ and } y_t=0 \\[4pt] \dfrac{(1-(b-\gamma_-))\,p_{t,i}}{1-(p_t \cdot x_t)} & \text{if } x_{t,i}=0 \text{ and } y_t=0. \end{cases} \qquad (4)$$
Note that this updates the weights multiplicatively, like Winnow and Weighted Majority.
If pt+1 is not on the separating hyperplane for St , then it must maximize entropy subject to
membership in Tt alone, and therefore subject to membership in Jt . In this case, Lemma 11
implies
$$p_{t+1,i} = \begin{cases} \dfrac{b+\gamma_+}{|\{j : x_{t,j}=1\}|} & \text{if } x_{t,i}=1 \text{ and } y_t=1 \\[4pt] \dfrac{1-(b+\gamma_+)}{|\{j : x_{t,j}=0\}|} & \text{if } x_{t,i}=0 \text{ and } y_t=1 \\[4pt] \dfrac{b-\gamma_-}{|\{j : x_{t,j}=1\}|} & \text{if } x_{t,i}=1 \text{ and } y_t=0 \\[4pt] \dfrac{1-(b-\gamma_-)}{|\{j : x_{t,j}=0\}|} & \text{if } x_{t,i}=0 \text{ and } y_t=0. \end{cases} \qquad (5)$$
If this is the case, then pt+1 defined as in (5) should be a member of St .
How can we test for membership in S_t? Evaluating the gradient of H at p_t, and simplifying a bit, we can see that
$$S_t = \Big\{\, q : \sum_{i=1}^{n} q_i \ln \frac{1}{p_{t,i}} \ge H(p_t) \,\Big\}.$$
Summing up, a way to implement a ROME algorithm with the same mistake bound as the corresponding OME algorithm is to
• try defining p_{t+1} as in (5), and check whether the resulting p_{t+1} ∈ S_t; if so, use it, and
• if not, then define p_{t+1} as in (4) instead.
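To make this recipe concrete, the following is a minimal runnable sketch of a single ROME update in Python. The function name, the NumPy representation, and the assumption that x_t is a 0/1 vector are ours; the update rules follow (4) and (5) and the membership test for S_t above.

```python
import numpy as np

def rome_update(p, x, y, b, gamma_plus, gamma_minus):
    """One ROME update after a mistake on trial (x, y); x is a 0/1 vector."""
    on = x.astype(bool)
    n_on, n_off = on.sum(), (~on).sum()
    # The target probability mass on the active coordinates.
    target = b + gamma_plus if y == 1 else b - gamma_minus

    # First try the closed form (5): maximum entropy subject to J_t alone.
    q = np.where(on, target / n_on, (1.0 - target) / n_off)

    # Membership test for S_t: sum_i q_i ln(1/p_i) >= H(p_t).
    H_p = -np.sum(p * np.log(p))
    if np.sum(q * np.log(1.0 / p)) >= H_p:
        return q

    # Otherwise fall back to the multiplicative update (4).
    dot = float(np.dot(p, x))
    return np.where(on, target * p / dot, (1.0 - target) * p / (1.0 - dot))
```

Note that the fallback branch rescales p_t multiplicatively, exactly as the text observes for (4).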
Acknowledgements
We are grateful to Tony Jebara and Tong Zhang for helpful conversations, and to an anonymous referee for suggesting a simplification of the proof of Theorem 3.
References
[1] D. Angluin and L. Valiant. Fast probabilistic algorithms for Hamiltonian circuits and matchings. Journal of Computer and System Sciences, 18(2):155-193, 1979.
[2] A. L. Berger, S. Della Pietra, and V. J. Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71, 1996.
[3] D. Blackwell. An analog of the minimax theorem for vector payoffs. Pac. J. Math., 6:1-8, 1956.
[4] A. Blum, 2002. http://www-2.cs.cmu.edu/~avrim/ML02/lect0418.txt.
[5] N. Cesa-Bianchi, A. Krogh, and M. Warmuth. Bounds on approximate steepest descent for likelihood maximization in exponential families. IEEE Transactions on Information Theory, 40(4):1215-1218, 1994.
[6] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 1991.
[7] K. Crammer, O. Dekel, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. NIPS, 2003.
[8] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. In COLT, pages 99-115, 2001.
[9] I. Csiszár. I-divergence geometry of probability distributions and minimization problems. Annals of Probability, 3:146-158, 1975.
[10] D. P. Dubhashi and A. Panconesi. Concentration of measure for the analysis of randomized algorithms, 1998. Monograph.
[11] G. J. Gordon. Regret bounds for prediction problems. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 29-40. ACM Press, New York, NY, 1999.
[12] D. P. Helmbold, N. Littlestone, and P. M. Long. On-line learning with linear loss constraints. Information and Computation, 161(2):140-171, 2000.
[13] M. Herbster and M. K. Warmuth. Tracking the best linear predictor. Journal of Machine Learning Research, 1:281-309, 2001.
[14] T. Jaakkola, M. Meila, and T. Jebara. Maximum entropy discrimination. NIPS, 1999.
[15] J. Kivinen and M. Warmuth. Boosting as entropy projection. COLT, 1999.
[16] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1-63, 1997.
[17] J. Langford, M. Seeger, and N. Megiddo. An improved predictive accuracy bound for averaging classifiers. ICML, pages 290-297, 2001.
[18] Y. Li and P. M. Long. The relaxed online maximum margin algorithm. Machine Learning, 46(1-3):361-387, 2002.
[19] N. Littlestone. Mistake Bounds and Logarithmic Linear-threshold Learning Algorithms. PhD thesis, UC Santa Cruz, 1989.
[20] N. Littlestone, P. M. Long, and M. K. Warmuth. On-line learning of linear functions. Computational Complexity, 5:1-23, 1995. Preliminary version in STOC'91.
[21] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108:212-261, 1994.
[22] A. B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, pages 615-622, 1962.
[23] M. S. Pinsker. Information and Information Stability of Random Variables and Processes. Holden-Day, 1964.
[24] F. Topsoe. Some inequalities for information divergence and related measures of discrimination. IEEE Trans. Inform. Theory, 46(4):1602-1609, 2001.
[25] P. Vaidya. A new algorithm for minimizing convex functions over convex sets. FOCS, pages 338-343, 1989.
[26] T. Zhang. Regularized winnow methods. NIPS, pages 703-709, 2000.
[27] T. Zhang. A sequential approximation bound for some sample-dependent convex optimization problems with applications in learning. COLT, pages 65-81, 2001.
Speaker Independent Speech Recognition with
Neural Networks and Speech Knowledge
Yoshua Bengio, Renato De Mori, Regis Cardin
Dept. of Computer Science, McGill University
Montreal, Canada H3A 2A7
ABSTRACT
We attempt to combine neural networks with knowledge from
speech science to build a speaker independent speech recognition system. This knowledge is utilized in designing the
preprocessing, input coding, output coding, output supervision
and architectural constraints. To handle the temporal aspect
of speech we combine delays, copies of activations of hidden
and output units at the input level, and Back-Propagation for
Sequences (BPS), a learning algorithm for networks with local
self-loops. This strategy is demonstrated in several experiments, in particular a nasal discrimination task for which the
application of a speech theory hypothesis dramatically improved generalization.
1 INTRODUCTION
The strategy put forward in this research effort is to combine the flexibility
and learning abilities of neural networks with as much knowledge from speech
science as possible in order to build a speaker independent automatic speech
recognition system. This knowledge is utilized in each of the steps in the construction of an automated speech recognition system: preprocessing, input
coding, output coding, output supervision, architectural design. In particular
for preprocessing we explored the advantages of various possible ways of processing the speech signal, such as comparing an ear model vs. the Fast Fourier Transform (FFT), or compressing the frame sequence in such a way as to
conserve an approximately constant rate of change. To handle the temporal
aspect of speech we propose to combine various algorithms depending of the
demands of the task, including an algorithm for a type of recurrent network
which includes only self-loops and is local in space and time (BPS). This strategy is demonstrated in several experiments, in particular a nasal discrimination task for which the application of a speech theory hypothesis drastically
improved generalization.
2 Application of Speech Knowledge
2.1 Preprocessing
Our previous work has shown us that the choice of preprocessing significantly
influences the performance of a neural network recognizer. (e.g., Bengio &
De Mori 1988) Different types of preprocessing processes and acoustic
features can be utilized at the input of a neural network. We used several
acoustic features (such as counts of zero crossings), filters derived from the
FFT, energy levels (of both the signal and its derivative) and ratios (Gori,
Bengio & De Mori 1989), as well as an ear model and synchrony detector.
Ear model VS. FFT
We performed experiments in speaker-independent recognition of 10 english
vowels on isolated words that compared the use of an ear model with an FFT
as preprocessing. The FFT was done using a mel scale and the same number
of filters (40) as for the ear model. The ear model was derived from the one
proposed by Seneff (1985). Recognition was performed with a neural network
with one hidden layer of 20 units. We obtained 87% recognition with the FFT
preprocessing VS. 96% recognition with the ear model (plus synchrony detector to extract spectral regularity from the instantaneous output of the ear
model) (Bengio, Cosi, De Mori 1989). This was an example of the successful
application of knowledge about human audition to the automatic recognition
of speech with machines.
Compression in time resulting in constant rate of change
The motivation for this processing step is the following. The rate of change of
the speech signal (as well as the output of networks performing acoustic-to-phonetic mappings) varies a lot. It would be nice to have more temporal precision in parts of the signal where there is a lot of variation (bursts,
fast transitions) and less temporal precision in more stable parts of the signal
(e.g., vowels, silence).
Given a sequence of vectors (parameters, which can be acoustic parameters,
such as spectral coefficients, as well as outputs from neural networks) we
transform it by compressing it in time in order to obtain a shorter sequence
where frames refer to segments of varying length of the original sequence.
A very simple algorithm that maps the sequence X(t) → the sequence Y(s), where X and Y are vectors:
{ Accumulate and average X(t), X(t+1), ..., X(t+n) in Y(s) as
  long as the sum of the distances Distance(X(t),X(t+1)) + ... +
  Distance(X(t+n-1),X(t+n)) is less than a threshold.
  When this threshold is reached,
    t ← t+n+1;
    s ← s+1; }
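For concreteness, here is a runnable Python version of this compression step; the Euclidean choice of Distance and all names are our own assumptions.

```python
import numpy as np

def compress(X, threshold):
    """Map a sequence of frames X[0..T-1] to a shorter sequence Y."""
    Y, t, T = [], 0, len(X)
    while t < T:
        segment, dist, k = [X[t]], 0.0, t
        # Accumulate frames while the cumulative frame-to-frame
        # distance stays below the threshold.
        while k + 1 < T and dist + np.linalg.norm(X[k + 1] - X[k]) < threshold:
            dist += np.linalg.norm(X[k + 1] - X[k])
            segment.append(X[k + 1])
            k += 1
        Y.append(np.mean(segment, axis=0))  # average the accumulated frames
        t = k + 1
    return np.array(Y)
```

A lower threshold keeps more temporal detail; a higher one compresses stable regions such as vowels more aggressively.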
The advantages of this system are the following: 1) more temporal precision
where needed, 2) reduction of the dimensionality of the problem, 3) constant
rate of change of the resulting signal so that when using input windows in a
neural net, the windows may have fewer frames, 4) better generalization since
several realizations of the same word spoken at different rates of speech tend
to be reduced to more similar sequences.
Initial results when this system is used to compress spectral parameters (24
mel-scaled FFT filters + energy) computed every 5 ms were interesting. The task was the classification of phonemes into 14 classes. The size of the database was reduced by 30%. The size of the window was reduced (4 frames instead of 8), hence the network size was reduced as well. Half the size of the
window was necessary in order to obtain similar performance on the training
set. Generalization on the test set was slightly better (from 38% to 33% classification error by frame). The idea to use a measure of rate of change to
process speech is not new (Atal, 1983) but we believe that it might be particularly useful when the recognition device is a neural network with an input of
several frames of acoustic parameters.
2.2 Input coding
Our previous work has shown us that information should be as easily accessible as possible to the network. For example, compression of the spectral information into cepstrum coefficients (with first few coefficients having very
large variance) resulted in poorer performance with respect to experiments
done with the spectrum itself. The recognition was performed with a neural
network where units compute the sigmoid of the weighted sum of their inputs.
The task was the broad classification of phonemes in 4 classes. The error on
the test set increased from 15% to 20% when using cepstral rather than spectral coefficients.
Another example concerns the recognition experiments for which there is a
lot of variance in the quantities presented in the input. A grid representation
with coarse coding improved learning time as well as generalization (since the
problem became more separable and thus the network needed less hidden units). (Bengio, De Mori, 1988).
2.3 Output coding
We have chosen an output coding scheme based on phonetic features defined
by the way speech is produced. This is generally more difficult to learn but
results in better generalization, especially with respect to new sounds that had
not been seen by the network during the training. We have demonstrated this
with experiments on vowel recognition in which the networks were trained to
recognize the place and the manner of articulation (Bengio, Cosi, De Mori
89). In addition the resulting representation is more compact than when using
one output for each phoneme. However, this representation remains meaningful, i.e., each output can be attributed a meaning almost independently of
the values of the other outputs.
In general, an explicit representation is preferred to an arbitrary and compact
one (such as a compact binary coding of the classes). Otherwise, the network
must perform an additional step of encoding. This can be costly in terms of
the size of the networks, and generally also in terms of generalization (given
the need for a larger number of weights).
2.4 Output supervision
When using a network with some recurrences it is not necessary that supervision be provided at every frame for every output (particularly for transition
periods which are difficult to label). Instead the supervision should be provided to the network when the speech signal clearly corresponds to the categories
one is trying to learn. We have used this approach when performing the
discrimination between /b/ and /d/ with the BPS (Back Propagation for Sequences) algorithm (self-loops only, cf. Section 3.3).
Giving additional information to the network through more supervision (with
extra output units) improved learning time and generalization (cf. Section 4).
2.5 Architectural design
Hypothesis about the nature of the processing to be performed by the network
based on speech science knowledge enables to put constraints on the architecture. These constraints result in a network that generalizes better than a fully
connected network. This strategy is most useful when the speech recognition
task has been modularized in the appropriate way so that the same architectural constraints do not have to apply to all of the subtasks. Here are several
examples of application of modularization. We initially explored modularization by acoustic context (different networks are triggered when various acoustic contexts are detected) (Bengio, Cardin, De Mori, Merlo 89). We also implemented modularization by independent articulatory features (vertical and horizontal place of articulation) (Bengio, Cosi, De Mori, 89). Another type of
modularization, by subsets of phonemes, was explored by several researchers,
in particular Alex Waibel (Waibel 88).
3 Temporal aspect of the speech recognition task
Both of the algorithms presented in the following subsections assume that one
is using the Least Mean Square Error criterion, but both can be easily modified for any type of error criterion. We used and sometimes combined the following techniques:
3.1 Delays
If the speech signal is preprocessed in such a way as to obtain a frame of
acoustic parameters for every interval of time, one can use delays from the input units representing these acoustic parameters to implement an input window on the input sequence, as in NETtalk, or using this strategy at every level
as in TDNNs (Waibel 88). Even when we use a recurrent network, a small
number of delays on the outgoing links of the input units might be useful. It
enables the network to make a direct comparison between successive frames.
3.2 BPS (Back Propagation for Sequences)
This is a learning algorithm that we have introduced for networks that have a
certain constrained type of recurrence (local self-loops). It permits computing the gradient of the error with respect to all weights. This algorithm has the same order of space and time requirements as backpropagation for feedforward networks. Experiments with the /b/ vs. /d/ speaker independent discrimination yielded 3.45% error on the test set for the BPS network as opposed to 6.9% error for a feedforward network (Gori, Bengio, De Mori 89).
BPS equations:
Feedforward pass:
• Dynamic units: these have a local self-loop and their input must come directly from the input layer.
$$X_i(t+1) = W_{ii}\,X_i(t) + \sum_j W_{ij}\, f(X_j(t))$$
$$\frac{\partial X_i(t+1)}{\partial W_{ij}} = W_{ii}\,\frac{\partial X_i(t)}{\partial W_{ij}} + f(X_j(t)) \quad \text{for } i \ne j$$
$$\frac{\partial X_i(t+1)}{\partial W_{ii}} = W_{ii}\,\frac{\partial X_i(t)}{\partial W_{ii}} + X_i(t) \quad \text{for } i = j$$
• Static units, i.e., without feedback, follow the usual Back-Propagation (BP) equations (Rumelhart et al. 1986):
$$X_i(t+1) = \sum_j W_{ij}\, f(X_j(t))$$
$$\frac{\partial X_i(t+1)}{\partial W_{ij}} = f(X_j(t))$$
Backpropagation pass, after every frame: as usual, but using the above definition of ∂X_i(t)/∂W_{ij} instead of the usual f(X_j(t)).
This algorithm has a time complexity O(L · N_w) (as static BP). It needs space O(N_u), where L is the length of a sequence, N_w is the number of weights and N_u is the number of units. Note that it is local in time (it is causal, with no backpropagation through time) and in space (only information coming from direct neighbors is needed).
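The following short Python sketch illustrates the BPS bookkeeping for one layer of dynamic units fed directly by the input layer, as the equations above require; the logistic choice of f and all names are our own assumptions.

```python
import numpy as np

def f(z):
    # Logistic squashing function (our choice for f).
    return 1.0 / (1.0 + np.exp(-z))

def bps_step(x, W, w_self, inp, dx_dW, dx_dself):
    """One frame of the BPS feedforward pass with recursive gradients.

    x:        activations X_i(t) of the dynamic units
    W:        weights W_ij from the input units to the dynamic units
    w_self:   self-loop weights W_ii, one per dynamic unit
    inp:      input-unit activations X_j(t)
    dx_dW:    running dX_i(t)/dW_ij, same shape as W
    dx_dself: running dX_i(t)/dW_ii
    """
    # Gradient recurrences, computed before x is overwritten.
    dx_dW = w_self[:, None] * dx_dW + f(inp)[None, :]
    dx_dself = w_self * dx_dself + x
    # X_i(t+1) = W_ii X_i(t) + sum_j W_ij f(X_j(t))
    x = w_self * x + W @ f(inp)
    return x, dx_dW, dx_dself
```

After each frame, the backpropagated error dE/dX_i(t) is combined with these stored derivatives, instead of f(X_j(t)), to accumulate the weight gradients.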
3.3 Discrete Recurrent Net without Constraints
This is how we compute the gradient in an unconstrained discrete recurrent
net. The derivation is similar to the one of Pearlmutter (1989). It is another
way to view the computation of the gradient for recurrent networks, called
time unfolding, which was presented by Rumelhart et al. (1986). Here the units have a memory of their past activations during the forward pass (from frame 1 to L) and a "memory" of the future ∂E/∂X_i during the backward pass (from frame L down to frame 1).
Forward phase: consider the possibility of an arbitrary number of connections from unit i to unit j, each having a different delay d.
$$X_i(t) = \sum_{j,d} W_{ijd}\, f(X_j(t-d)) + I(i,t)$$
Here, the basic idea is to compute ∂E/∂W_{ijd} by computing ∂E/∂X_i(t):
$$\frac{\partial E}{\partial W_{ijd}} = \sum_t \frac{\partial E}{\partial X_i(t)}\,\frac{\partial X_i(t)}{\partial W_{ijd}}$$
where ∂X_i(t)/∂W_{ijd} = f(X_j(t−d)) as usual. In the backward phase we backpropagate ∂E/∂X_i(t) recursively from the last time frame t = L down to frame 1:
$$\frac{\partial E}{\partial X_i(t)} = \sum_{k,d} W_{kid}\,\frac{\partial E}{\partial X_k(t+d)}\, f'(X_i(t)) + (\text{if } i \text{ is an output unit})\; \big(f(X_i(t)) - Y_i^*(t)\big)\, f'(X_i(t))$$
where Y_i^*(t) is the target output for unit i at time t. In this equation the first term represents back propagation from future times and downstream units, while the second one comes from direct external supervision. This algorithm works for any connectivity of the recurrent network with delays. Its time complexity is O(L · N_w) (as static BP). However, the space requirements are O(L · N_u). The algorithm is local in space but not in time; however, we found that restriction not to be very important in speech recognition, where we consider at most a few hundred frames of left context (one sentence).
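A compact sketch of this backward pass in Python, assuming the forward pass has already stored the pre-activations X(1), ..., X(L); the storage layout, the squared-error supervision, and all names are our own assumptions.

```python
import numpy as np

def f(z):
    return 1.0 / (1.0 + np.exp(-z))

def fprime(z):
    s = f(z)
    return s * (1.0 - s)

def recurrent_gradients(X, W, targets, output_units):
    """Time-unfolded gradient for a recurrent net with delayed connections.

    X:       (L, n) pre-activations X_i(t) stored by the forward pass
    W:       list of (n, n) matrices; W[d-1][i, j] is the weight W_ijd
    targets: (L, n) desired outputs Y_i*(t); only output_units are used
    """
    L, n = X.shape
    D = len(W)
    dE_dX = np.zeros((L + D, n))          # room for the t + d indices
    dE_dW = [np.zeros((n, n)) for _ in range(D)]
    for t in range(L - 1, -1, -1):
        err = np.zeros(n)                 # direct external supervision
        err[output_units] = f(X[t])[output_units] - targets[t][output_units]
        # Backpropagation from future frames through every delay d.
        back = sum(W[d - 1].T @ dE_dX[t + d] for d in range(1, D + 1))
        dE_dX[t] = (back + err) * fprime(X[t])
        for d in range(1, D + 1):
            if t - d >= 0:
                dE_dW[d - 1] += np.outer(dE_dX[t], f(X[t - d]))
    return dE_dW
```

The (L, n) buffer for dE/dX is what makes the space requirement O(L · N_u), in contrast with the O(N_u) storage of BPS.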
4 Nasal experiment
As an example of the application of the above described strategy we have performed the following experiment with the discrimination of the nasals /m/ and /n/ in a fixed context. The speech material consisted of 294 tokens from 70 training speakers (male and female with various accents) and 38 tokens from 10
test speakers. The speech signal is preprocessed with an ear model followed
by a generalized synchrony detector yielding 40 spectral parameters every 10
ms. Early experiments with a simple output coding {vowel, m, n}, a window
of two consecutive frames as input, and a two-layer fully connected architecture with 10 hidden units gave poor results: 15% error on the test set. A
speech theory hypothesis claiming that the most critical discriminatory information for the nasals is available during the transition between the vowel and
the nasal inspired us to try the following output coding: {vowel, transition to
m, transition to n, nasal}. Since the transition was more important we chose
as input a window of 4 frames at times t, t−10ms, t−30ms and t−70ms. To
reduce the connectivity the architecture included a constrained first hidden
layer of 40 units where each unit was meant to correspond to one of the 40
spectral frequencies of the preprocessing stage. Each such hidden unit associated with filter bank F was connected (when possible) to input units corresponding to frequency banks (F−2, F−1, F, F+1, F+2) and times (t, t−10ms, t−30ms, t−70ms).
Experiments with this feedforward delay network (160 inputs - 40 hidden - 10 hidden - 4 outputs) showed that, indeed, the strongest clues about the identity
of the nasal seemed to be available during the transition and for a very short
time, just before the steady part of the nasal started. In order to extract that
critical information from the stream of outputs of this network, a second network was trained on the outputs of the first one to provide clearly the discrimination of the nasal during the whole of the nasal. That higher level network
used the BPS algorithm to learn about the temporal nature of the task and
keep the detected critical information during the length of the nasal. Recognition performance reached a plateau of 1.14% errors on the training set. Generalization was very good with only 2.63% error on the test set.
5 Future experiments
One of the advantages of using phonetic features instead of phonemes to
describe the speech is that they could help to learn more robustly about the
influence of context. If one uses a phonemic representation and tries to
characterize the influence of the past phoneme on the current phoneme, one
faces the problem of poor statistical sampling of many of the corresponding
diphones (in a realistic database). On the other hand, if speech is characterized by several independent dimensions such as horizontal and vertical place
of articulation and voicing, then the number of possible contexts to consider
for each value of one of the dimensions is much more limited. Hence the set
of examples characterizing those contexts is much richer.
We now present some observations on continuous speech based on our initial
work with the TIMIT database in which we try learning articulatory features.
Although we have obtained good results for the recognition of articulatory
features (horizontal and vertical place of articulation) for isolated words, initial results with continuous speech are less encouraging. Indeed, whereas the
measured place of articulation (by the networks) for phonemes in isolated
speech corresponds well to expectations (as defined by acousticians who physically measured these features for isolated short words), this is not the case
for continuous speech. In the latter case, phonemes have a much shorter
duration so that the articulatory features are most of the time in transition,
and the place of articulation generally does not reach the expected target
values (although it always moves in the right direction ). This is probably due
to the inertia of the production system and to coarticulation effects. In order
to attack that problem we intend to perform the following experiments. We
could use the subset of the database for which the phoneme duration is sufficiently long to learn an approximation of the articulatory features. We could
then improve that approximation in order to be able to learn about the trajectories of these features found in the transitions from one phoneme to the
next. This could be done by using a two stage network (similar to the encoder
network) with a bottleneck in the middle. The first stage of the network produces phonetic features and receives supervision only on the steady parts of
the speech. The second stage of the network (which would be a recurrent network) has as input the trajectory of the approximation of the phonetic
features and produces as output the previous, current and next phoneme. As
an additional constraint, we propose to use self-loops with various time constants on the units of the bottleneck. Units that represent fast varying descriptors of speech will have a short time constant, while units that we want to
have represent information about the past acoustic context will have a slightly
longer time constant, and units that could represent very long time range information (such as information about the speaker or the recording conditions) will receive a very long time constant.
This paper has proposed a general strategy for setting up a speaker independent speech recognition system with neural networks using as much speech
knowledge as possible. We explored several aspects of this problem including
preprocessing, input coding, output coding, output supervision, architectural
design, algorithms for recurrent networks, and have described several initial
experimental results to support these ideas.
References
Atal B.S. (1983), Efficient coding of LPC parameters by temporal decomposition, Proc. ICASSP 83, Boston, pp. 81-84.
Bengio Y., Cardin R., De Mori R., Merlo E. (1989), Programmable execution of multi-layered networks for automatic speech recognition, Communications of the Association for Computing Machinery, 32 (2).
Bengio Y., Cardin R., De Mori R., (1990), Speaker independent speech recognition with neural networks and speech knowledge, in D.S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, San Mateo, CA: Morgan Kaufmann.
Bengio Y., De Mori R., (1988), Speaker normalization and automatic speech recognition using spectral lines and neural networks, Proc. Canadian Conference on Artificial Intelligence (CSCSI-88), Edmonton AL, May 88.
Bengio Y., Cosi P., De Mori R., (1989), On the generalization capability of multi-layered networks in the extraction of speech properties, Proc. International Joint Conference on Artificial Intelligence (IJCAI-89), Detroit, August 89, pp. 1531-1536.
Gori M., Bengio Y., De Mori R., (1989), BPS: a learning algorithm for capturing the dynamic nature of speech, Proc. IEEE International Joint Conference on Neural Networks, Washington, June 89.
Pearlmutter B.A., (1989), Learning state space trajectories in recurrent neural networks, Neural Computation, vol. 1, no. 2, pp. 263-269.
Rumelhart D.E., Hinton G., Williams R.J., (1986), Learning internal representation by error propagation, in Parallel Distributed Processing: explorations in the microstructure of cognition, vol. 1, MIT Press, 1986.
Seneff S., (1985), Pitch and spectral analysis of speech based on an auditory synchrony model, RLE Technical Report 504, MIT.
Waibel A., (1988), Modularity in neural networks for speech recognition, Advances in Neural Information Processing Systems 1, San Mateo, CA: Morgan Kaufmann.
Experts in a Markov Decision Process
Eyal Even-Dar
Computer Science
Tel-Aviv University
[email protected]
Sham M. Kakade
Computer and Information Science
University of Pennsylvania
[email protected]
Yishay Mansour∗
Computer Science
Tel-Aviv University
[email protected]
Abstract
We consider an MDP setting in which the reward function is allowed to
change during each time step of play (possibly in an adversarial manner),
yet the dynamics remain fixed. Similar to the experts setting, we address
the question of how well can an agent do when compared to the reward
achieved under the best stationary policy over time. We provide efficient
algorithms, which have regret bounds with no dependence on the size of
state space. Instead, these bounds depend only on a certain horizon time
of the process and logarithmically on the number of actions. We also
show that in the case that the dynamics change over time, the problem
becomes computationally hard.
1 Introduction
There is an inherent tension between the objectives in an expert setting and those in a reinforcement learning setting. In the experts problem, during every round a learner chooses
one of n decision making experts and incurs the loss of the chosen expert. The setting is
typically an adversarial one, where Nature provides the examples to a learner. The standard objective here is a myopic, backwards looking one: in retrospect, we desire that our
performance is not much worse than had we chosen any single expert on the sequence of
examples provided by Nature. In contrast, a reinforcement learning setting typically makes
the much stronger assumption of a fixed environment, typically a Markov decision process (MDP), and the forward looking objective is to maximize some measure of the future
reward with respect to this fixed environment.
The motivation of this work is to understand how to efficiently incorporate the benefits of
existing experts algorithms into a more adversarial reinforcement learning setting, where
certain aspects of the environment could change over time. A naive way to implement an
experts algorithm is to simply associate an expert with each fixed policy. The running time
of such algorithms is polynomial in the number of experts and the regret (the difference
from the optimal reward) is logarithmic in the number of experts. For our setting the number of policies is huge, namely #actions^{#states}, which renders the naive experts approach
computationally infeasible.
Furthermore, straightforward applications of standard regret algorithms produce regret
bounds which are logarithmic in the number of policies, so they have linear dependence
∗ This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778, by a grant from the Israel Science Foundation and an IBM faculty award. This publication only reflects the authors' views.
on the number of states. We might hope for a more effective regret bound which has no
dependence on the size of state space (which is typically large).
The setting we consider is one in which the dynamics of the environment are known to
the learner, but the reward function can change over time. We assume that after each time
step the learner has complete knowledge of the previous reward functions (over the entire
environment), but does not know the future reward functions.
As a motivating example one can consider taking a long road-trip over some period of
time T . The dynamics, namely the roads, are fixed, but the road conditions may change
frequently. By listening to the radio, one can get (effectively) instant updates of the road
and traffic conditions. Here, the task is to minimize the cost during the period of time T .
Note that at each time step we select one road segment, suffer a certain delay, and need to
plan ahead with respect to our current position.
This example is similar to an adversarial shortest path problem considered in Kalai and
Vempala [2003]. In fact, Kalai and Vempala [2003] address the computational difficulty of
handling a large number of experts under certain linear assumptions on the reward functions. However, their algorithm is not directly applicable to our setting, due to the fact that
in our setting, decisions must be made with respect to the current state of the agent (and
the reward could be changing frequently), while in their setting the decisions are only made
with respect to a single state.
McMahan et al. [2003] also considered a similar setting: they also assume that the reward
function is chosen by an adversary and that the dynamics are fixed. However, they assume
that the cost functions come from a finite set (but are not observable) and the goal is to find
a min-max solution for the related stochastic game.
In this work, we provide efficient ways to incorporate existing best experts algorithms into
the MDP setting. Furthermore, our loss bounds (compared to the best constant policy) have
no dependence on the number of states and depend only on a certain horizon time of
the environment and log(#actions). There are two sensible extensions of our setting. The
first is where we allow Nature to change the dynamics of the environment over time. Here,
we show that it becomes NP-Hard to develop a low regret algorithm even for oblivious
adversary. The second extension is to consider one in which the agent only observes the
rewards for the states it actually visits (a generalization of the multi-arm bandits problem).
We leave this interesting direction for future work.
2 The Setting
We consider an MDP with state space S; initial state distribution d_1 over S; action space A; state transition probabilities {P_sa(·)} (here, P_sa is the next-state distribution on taking action a in state s); and a sequence of reward functions r_1, r_2, ..., r_T, where r_t is the (bounded) reward function at time step t, mapping S × A into [0, 1].
The goal is to maximize the sum of undiscounted rewards over a T step horizon. We assume
the agent has complete knowledge of the transition model P , but at time t, the agent only
knows the past reward functions r_1, r_2, ..., r_{t-1}. Hence, an algorithm A is a mapping from S and the previous reward functions r_1, ..., r_{t-1} to a probability distribution over actions, so A(a|s, r_1, ..., r_{t-1}) is the probability of taking action a at time t.
We define the return of an algorithm A as:
$$V_{r_1,r_2,\ldots,r_T}(A) = E\Big[\frac{1}{T}\sum_{t=1}^{T} r_t(s_t,a_t) \,\Big|\, d_1, A\Big]$$
where a_t ∼ A(a|s_t, r_1, ..., r_{t-1}) and s_t is the random variable which represents the state at time t, starting from initial state s_1 ∼ d_1 and following actions a_1, a_2, ..., a_{t-1}. Note
that we keep track of the expectation and not of a specific trajectory (and our algorithm
specifies a distribution over actions at every state and at every time step t).
Ideally, we would like to find an A which achieves a large reward V_{r_1,...,r_T}(A) regardless of how the adversary chooses the reward functions. In general, this of course is not possible, and, as in the standard experts setting, we desire that our algorithm competes favorably against the best fixed stationary policy π(a|s) in hindsight.
3 An MDP Experts Algorithm
3.1 Preliminaries
Before we provide our algorithm, a few definitions are in order. For every stationary policy π(a|s), we define P^π to be the transition matrix induced by π, where the component [P^π]_{s,s'} is the transition probability from s to s' under π. Also, define d_{π,t} to be the state distribution at time t when following π, i.e.,
$$d_{\pi,t} = d_1 (P^\pi)^t$$
where we are treating d_1 as a row vector here.
Assumption 1 (Mixing) We assume the transition model over states, as determined by π, has a well defined stationary distribution, which we call d_π. More formally, for every initial state s, d_{π,t} converges to d_π as t tends to infinity, and d_π P^π = d_π. Furthermore, this implies there exists some τ such that for all policies π and all distributions d and d',
$$\|dP^\pi - d'P^\pi\|_1 \le e^{-1/\tau}\,\|d - d'\|_1$$
where ‖x‖₁ denotes the ℓ₁ norm of a vector x. We refer to τ as the mixing time and assume that τ > 1.
The parameter τ provides a bound on the planning horizon timescale, since it implies that every policy achieves close to its average reward in O(τ) steps¹. This parameter also governs how long it effectively takes to switch from one policy to another (after O(τ) steps there is little information in the state distribution about the previous policy).
This assumption allows us to define the average reward of policy π in an MDP with reward function r as:
$$\eta_r(\pi) = E_{s \sim d_\pi,\, a \sim \pi(a|s)}[r(s,a)]$$
and the value, Q_{π,r}(s,a), is defined as
$$Q_{\pi,r}(s,a) \equiv E\Big[\sum_{t=1}^{\infty}\big(r(s_t,a_t) - \eta_r(\pi)\big) \,\Big|\, s_1 = s,\ a_1 = a,\ \pi\Big]$$
where s_t and a_t are the state and action at time t, after starting from state s_1 = s, then deviating with an immediate action of a_1 = a and following π onwards. We slightly abuse notation by writing Q_{π,r}(s,π') = E_{a∼π'(a|s)}[Q_{π,r}(s,a)]. These values satisfy the well known recurrence equation:
$$Q_{\pi,r}(s,a) = r(s,a) - \eta_r(\pi) + E_{s' \sim P_{sa}}[Q_\pi(s',\pi)] \qquad (1)$$
where Q_π(s',π) is the next state value (without deviation).
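As an illustration, the following Python sketch evaluates η_r(π) and Q_{π,r} for a known model using the recurrence (1); the truncation horizon, the power iteration for d_π, and all names are our own assumptions.

```python
import numpy as np

def evaluate(P, r, pi, horizon):
    """P: (S, A, S) transition tensor; r: (S, A) rewards; pi: (S, A) policy."""
    P_pi = np.einsum('sa,sax->sx', pi, P)   # state transitions under pi
    r_pi = np.einsum('sa,sa->s', pi, r)     # expected reward under pi
    S = P.shape[0]
    d = np.ones(S) / S
    for _ in range(horizon):                # power iteration toward d_pi
        d = d @ P_pi
    eta = d @ r_pi                          # average reward eta_r(pi)
    h = np.zeros(S)                         # h(s) = Q_{pi,r}(s, pi)
    for _ in range(horizon):                # truncated recurrence (1)
        h = r_pi - eta + P_pi @ h
    Q = r - eta + np.einsum('sax,x->sa', P, h)
    return eta, Q
```

By Lemma 2, a horizon of a few multiples of τ suffices for both loops to converge to good accuracy.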
¹ If this timescale is unreasonably large for some specific MDP, then one could artificially impose some horizon time and attempt to compete with those policies which mix in this horizon time, as done in Kearns and Singh [1998].
If π* is an optimal policy (with respect to η_r), then, as usual, we define Q*_r(s,a) to be the value of the optimal policy, i.e., Q*_r(s,a) = Q_{π*,r}(s,a).
We now provide two useful lemmas. It is straightforward to see that the previous assumption implies a rate of convergence to the stationary distribution that is O(τ), for all policies. The following lemma states this more precisely.
Lemma 2 For all policies π,
$$\|d_{\pi,t} - d_\pi\|_1 \le 2e^{-t/\tau}.$$
Proof. Since d_π is stationary, we have d_π P^π = d_π, and so
$$\|d_{\pi,t} - d_\pi\|_1 = \|d_{\pi,t-1}P^\pi - d_\pi P^\pi\|_1 \le \|d_{\pi,t-1} - d_\pi\|_1\, e^{-1/\tau}$$
which implies ‖d_{π,t} − d_π‖₁ ≤ ‖d_1 − d_π‖₁ e^{−t/τ}. The claim now follows since, for all distributions d and d', ‖d − d'‖₁ ≤ 2.
The following derives a bound on the Q values as a function of the mixing time.
Lemma 3 For all reward functions r, $Q_{\pi,r}(s,a) \le 3\tau$.
Proof. First, let us bound $Q_{\pi,r}(s,\pi)$, where π is used on the first step. For all t, including t = 1, let d_{π,s,t} be the state distribution at time t starting from state s and following π. Hence, we have
$$Q_{\pi,r}(s,\pi) = \sum_{t=1}^{\infty}\Big(E_{s'\sim d_{\pi,s,t},\,a\sim\pi}[r(s',a)] - \eta_r(\pi)\Big) \le \sum_{t=1}^{\infty}\Big(E_{s'\sim d_{\pi},\,a\sim\pi}[r(s',a)] - \eta_r(\pi) + 2e^{-t/\tau}\Big) = \sum_{t=1}^{\infty} 2e^{-t/\tau} \le \int_0^{\infty} 2e^{-t/\tau}\,dt = 2\tau.$$
Using the recurrence relation for the values, we know $Q_{\pi,r}(s,a)$ could be at most 1 more than the above. The result follows since $1 + 2\tau \le 3\tau$.
3.2 The Algorithm
Now we provide our main result showing how to use any generic experts algorithm in our
setting. We associate each state with an experts algorithm, and the expert for each state
is responsible for choosing the actions at that state. The immediate question is what loss function we should feed to each expert. It turns out $Q_{\pi_t,r_t}$ is appropriate. We now assume that our experts algorithm achieves a performance comparable to the best constant action.
Assumption 4 (Black Box Experts) We assume access to an optimized best expert algorithm which guarantees that for any sequence of loss functions c_1, c_2, ..., c_T over actions A, the algorithm selects a distribution q_t over A (using only the previous loss functions c_1, c_2, ..., c_{t-1}) such that for every action a,
$$\sum_{t=1}^{T} E_{a' \sim q_t}[c_t(a')] \le \sum_{t=1}^{T} c_t(a) + M\sqrt{T \log |A|},$$
where |c_t(a)| ≤ M. Furthermore, we also assume that the decision distributions do not change quickly:
$$\|q_t - q_{t+1}\|_1 \le \sqrt{\frac{\log |A|}{t}}.$$
These assumptions are satisfied by multiplicative weights algorithms. For instance, the algorithm in Freund and Schapire [1999] is such that, for each decision a, |log q_t(a) − log q_{t+1}(a)| changes by O(√(log|A|/t)), which implies the weaker ℓ₁ condition above.
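As an illustration, a minimal multiplicative-weights "black box" of the kind Assumption 4 describes might look as follows; the learning rate schedule is one standard choice, not a claim about the exact constants in Freund and Schapire [1999].

```python
import numpy as np

class Expert:
    """Multiplicative-weights expert over a finite action set."""
    def __init__(self, n_actions, M, T):
        self.w = np.ones(n_actions)
        # Assumed schedule; losses are scaled by their bound M.
        self.eta = np.sqrt(np.log(n_actions) / T) / M

    def distribution(self):
        return self.w / self.w.sum()

    def update(self, losses):               # losses c_t(a), |c_t(a)| <= M
        self.w *= np.exp(-self.eta * losses)
```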
In our setting, we have an experts algorithm associated with every state s, which is fed the loss function Q_{π_t,r_t}(s,·) at time t. The above assumption then guarantees (applied to the losses −Q_{π_t,r_t}(s,·), since here larger values are better) that at every state s, for every action a, we have
$$\sum_{t=1}^{T} Q_{\pi_t,r_t}(s,\pi_t) \ge \sum_{t=1}^{T} Q_{\pi_t,r_t}(s,a) - 3\tau\sqrt{T \log |A|}$$
since the loss function Q_{π_t,r_t} is bounded by 3τ, and that
$$\|\pi_t(\cdot|s) - \pi_{t+1}(\cdot|s)\|_1 \le \sqrt{\frac{\log |A|}{t}}.$$
As we shall see, it is important that this "slow change" condition be satisfied. Intuitively, our experts algorithms will be using a similar policy for significantly long periods of time. Also note that since the experts algorithms are associated with each state, and each of the N experts chooses decisions out of A actions, the algorithm is efficient (polynomial in N and A, assuming that the black box uses a reasonable experts algorithm).
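Putting the pieces together, here is a sketch of the full MDP experts algorithm, reusing the Expert and evaluate sketches above; all glue code, the truncation horizon, and the names are our own assumptions.

```python
import numpy as np

def mdp_experts(P, rewards, tau):
    """P: (S, A, S) known dynamics; rewards: list of (S, A) arrays r_t."""
    S, A, _ = P.shape
    T = len(rewards)
    experts = [Expert(A, M=3 * tau, T=T) for _ in range(S)]
    pi = np.full((S, A), 1.0 / A)
    for r_t in rewards:                      # r_t is revealed after acting
        pi = np.array([e.distribution() for e in experts])
        # ... act according to pi in the real MDP here ...
        _, Q = evaluate(P, r_t, pi, horizon=int(10 * tau))
        for s in range(S):
            experts[s].update(-Q[s])         # Q is a gain; feed -Q as loss
    return pi
```

Each state's expert sees only the |A|-dimensional vector Q_{π_t,r_t}(s, ·), which is why the resulting regret bound carries no dependence on the number of states.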
We now state our main theorem.
Theorem 5 Let A be the MDP experts algorithm. Then for all reward functions r_1, r_2, ..., r_T and for all stationary policies π,
$$V_{r_1,\ldots,r_T}(A) \ge V_{r_1,\ldots,r_T}(\pi) - 8\tau^2\sqrt{\frac{\log|A|}{T}} - 3\tau\sqrt{\frac{\log|A|}{T}} - \frac{4\tau}{T}.$$
As expected, the regret goes to 0 at the rate O(1/√T), as is the case with experts algorithms. Importantly, note that the bound does not depend on the size of the state space.
3.3 The Analysis
The analysis is naturally divided into two parts. First, we analyze the performance of the
algorithm in an idealized setting, where the algorithm instantaneously obtains the average
reward of its current policy at each step. Then we take into account the slow change of the
policies to show that the actual performance is similar to the instantaneous performance.
An Idealized Setting: Let us examine the case in which at each time t, when the algorithm uses π_t, it immediately obtains reward η_{r_t}(π_t). The following theorem compares the performance of our algorithm to that of a fixed constant policy in this setting.
Theorem 6 For all sequences r_1, r_2, ..., r_T, the MDP experts algorithm has the following performance bound: for all π,
$$\sum_{t=1}^{T}\eta_{r_t}(\pi_t) \ge \sum_{t=1}^{T}\eta_{r_t}(\pi) - 3\tau\sqrt{T\log|A|}$$
where π_1, π_2, ..., π_T is the sequence of policies generated by A in response to r_1, r_2, ..., r_T.
Next we provide a technical lemma, which is a variant of a result in Kakade [2003].
Lemma 7 For all policies π and π',
$$\eta_r(\pi') - \eta_r(\pi) = E_{s \sim d_{\pi'}}\big[Q_{\pi,r}(s,\pi') - Q_{\pi,r}(s,\pi)\big].$$
Proof. Note that by definition of stationarity, if the state distribution is d_{π'}, then the next state distribution is also d_{π'} if π' is followed. More formally, if s ∼ d_{π'}, a ∼ π'(a|s), and s' ∼ P_{sa}, then s' ∼ d_{π'}. Using this and equation (1), we have:
$$E_{s \sim d_{\pi'}}[Q_{\pi,r}(s,\pi')] = E_{s \sim d_{\pi'},\, a \sim \pi'}[Q_{\pi,r}(s,a)] = E_{s \sim d_{\pi'},\, a \sim \pi'}\big[r(s,a) - \eta_r(\pi) + E_{s' \sim P_{sa}}[Q_\pi(s',\pi)]\big] = E_{s \sim d_{\pi'},\, a \sim \pi'}[r(s,a) - \eta_r(\pi)] + E_{s \sim d_{\pi'}}[Q_\pi(s,\pi)] = \eta_r(\pi') - \eta_r(\pi) + E_{s \sim d_{\pi'}}[Q_\pi(s,\pi)].$$
Rearranging terms leads to the result.
The lemma shows why our choice to feed each experts algorithm Q_{π_t,r_t} was appropriate.
Now we complete the proof of the above theorem.
Proof. Using the assumed regret bound in Assumption 4,
$$\sum_{t=1}^{T}\eta_{r_t}(\pi) - \sum_{t=1}^{T}\eta_{r_t}(\pi_t) = \sum_{t=1}^{T} E_{s\sim d_\pi}\big[Q_{\pi_t,r_t}(s,\pi) - Q_{\pi_t,r_t}(s,\pi_t)\big] = E_{s\sim d_\pi}\Big[\sum_{t=1}^{T}\big(Q_{\pi_t,r_t}(s,\pi) - Q_{\pi_t,r_t}(s,\pi_t)\big)\Big] \le E_{s\sim d_\pi}\big[3\tau\sqrt{T\log|A|}\big] = 3\tau\sqrt{T\log|A|}$$
where we used the fact that d_π does not depend on the time in the second step.
Taking Mixing Into Account: This subsection relates the values V to the sums of average
reward used in the idealized setting.
Theorem 8 For all sequences r_1, r_2, ..., r_T and for all A,
$$\Big|\,V_{r_1,\ldots,r_T}(A) - \frac{1}{T}\sum_{t=1}^{T}\eta_{r_t}(\pi_t)\,\Big| \le 4\tau^2\sqrt{\frac{\log|A|}{T}} + \frac{2\tau}{T}$$
where π_1, π_2, ..., π_T is the sequence of policies generated by A in response to r_1, r_2, ..., r_T.
Since the above holds for all A (including those A which are the constant policy π), combining this with Theorem 6 (once with A and once with π) completes the proof of Theorem 5. We now prove the above.
The following simple lemma is useful and we omit the proof. It shows how close the next state distributions are when following π_t rather than π_{t+1}.
Lemma 9 Let π and π' be such that ‖π(·|s) − π'(·|s)‖₁ ≤ ε for all s. Then for any state distribution d, we have ‖dP^π − dP^{π'}‖₁ ≤ ε.
Analogous to the definition of d_{π,t}, we define
$$d_{A,t}(s) = \Pr[s_t = s \mid d_1, A]$$
which is the probability that the state at time t is s given that A has been followed.
Lemma 10 Let π_1, π_2, ..., π_T be the sequence of policies generated by A in response to r_1, r_2, ..., r_T. We have
$$\|d_{A,t} - d_{\pi_t}\|_1 \le 2\tau^2\sqrt{\frac{\log|A|}{t}} + 2e^{-t/\tau}.$$
Proof. Let k ≤ t. Using our experts assumption, it is straightforward to see that the change in the policy over t − k steps is ‖π_k(·|s) − π_t(·|s)‖₁ ≤ (t−k)√(log|A|/t). Using this with d_{A,k} = d_{A,k-1}P^{π_k} and d_{π_t}P^{π_t} = d_{π_t}, we have
$$\|d_{A,k} - d_{\pi_t}\|_1 = \|d_{A,k-1}P^{\pi_k} - d_{\pi_t}\|_1 \le \|d_{A,k-1}P^{\pi_t} - d_{\pi_t}\|_1 + \|d_{A,k-1}P^{\pi_k} - d_{A,k-1}P^{\pi_t}\|_1 \le \|d_{A,k-1}P^{\pi_t} - d_{\pi_t}P^{\pi_t}\|_1 + 2(t-k)\sqrt{\log|A|/t} \le e^{-1/\tau}\|d_{A,k-1} - d_{\pi_t}\|_1 + 2(t-k)\sqrt{\log|A|/t}$$
where we have used the last lemma in the third step and our contraction Assumption 1 in the second to last step. Recursing on the above equation leads to:
$$\|d_{A,t} - d_{\pi_t}\|_1 \le 2\sqrt{\log|A|/t}\sum_{k=2}^{t}(t-k)\,e^{-(t-k)/\tau} + e^{-t/\tau}\|d_1 - d_{\pi_t}\|_1 \le 2\sqrt{\log|A|/t}\sum_{k=1}^{\infty} k\,e^{-k/\tau} + 2e^{-t/\tau}.$$
The sum is bounded by an integral from 0 to ∞, which evaluates to τ².
We are now ready to complete the proof of Theorem 8.
Proof. By definition of V,
$$V_{r_1,\ldots,r_T}(A) = \frac{1}{T}\sum_{t=1}^{T} E_{s\sim d_{A,t},\,a\sim\pi_t}[r_t(s,a)] \le \frac{1}{T}\sum_{t=1}^{T} E_{s\sim d_{\pi_t},\,a\sim\pi_t}[r_t(s,a)] + \frac{1}{T}\sum_{t=1}^{T}\|d_{A,t} - d_{\pi_t}\|_1 \le \frac{1}{T}\sum_{t=1}^{T}\eta_{r_t}(\pi_t) + \frac{1}{T}\sum_{t=1}^{T}\Big(2\tau^2\sqrt{\frac{\log|A|}{t}} + 2e^{-t/\tau}\Big) \le \frac{1}{T}\sum_{t=1}^{T}\eta_{r_t}(\pi_t) + 4\tau^2\sqrt{\frac{\log|A|}{T}} + \frac{2\tau}{T}$$
where we have bounded the sums by integration in the second to last step. A symmetric argument leads to the result.
4 A More Adversarial Setting
In this section we explore a different setting, the changing dynamics model. Here, in each
timestep t, an oblivious adversary is allowed to choose both the reward function rt and
the transition model P_t, the model that determines the transitions to be used at timestep
t. After each timestep, the agent receives complete knowledge of both rt and Pt . Furthermore, we assume that Pt is deterministic, so we do not concern ourselves with mixing
issues. In this setting, we have the following hardness result. We let R*_t(M) be the optimal average reward obtained by a stationary policy for times [1, t].
Theorem 11 In the changing dynamics model, if there exists a polynomial time online
algorithm (polynomial in the problem parameters) such that, for any MDP, it has an expected average reward larger than (0.875 + ε)R*_t(M) for some ε > 0 and t, then P = NP.
The following lemma is useful in the proof and uses the fact that it is hard to approximate
MAX3SAT within any factor better than 0.875 (Håstad [2001]).
Lemma 12 Computing a stationary policy in the changing dynamics model with average
reward larger than (0.875 + ε)R*(M), for some ε > 0, is NP-Hard.
Proof: We prove it by reduction from 3-SAT. Suppose that the 3-SAT formula φ has m clauses, C1, . . . , Cm, and n literals, x1, . . . , xn; we reduce it to an MDP with n + 1 states, s1, . . . , sn, sn+1, two actions, 0 and 1, in each state, and fixed dynamics for 3m steps, which will be described later. We prove that a policy with average reward p/3 translates to an assignment that satisfies a p fraction of φ, and vice versa. Next we describe the dynamics. Suppose that C1 is (x1 ∨ ¬x2 ∨ x7) and C2 is (x4 ∨ ¬x1 ∨ x7). The initial state is s1; the reward for action 0 is 0 and the agent moves to state s2, while for action 1 the reward is 1 and the agent moves to state sn+1. In the second timestep the reward in sn+1 is 0 for every action and the agent stays in it; in state s2, if the agent performs action 0 then it obtains reward 1 and moves to state sn+1, otherwise it obtains reward 0 and moves to state s7. In the next timestep the reward in sn+1 is 0 for every action and the agent moves to s4; the reward in s7 is 1 for action 1 and zero for action 0, and the agent moves to s4 for both actions. The rest of the construction is done identically. Note that the time interval [3(ℓ − 1) + 1, 3ℓ] corresponds to Cℓ and that the reward obtained in this interval is at most 1. We note that φ has an assignment y1, . . . , yn with yi ∈ {0, 1} that satisfies a p fraction of its clauses if and only if the policy π that takes action yi in si has average reward p/3. We prove this by looking at each interval separately and noting that if a reward of 1 is obtained, then there is an action a taken in one of the states that has reward 1, and this action corresponds to a satisfying assignment for the clause.
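The correspondence is easy to simulate. The sketch below (ours; the encoding is illustrative, not from the paper) evaluates the average reward of the stationary policy induced by an assignment y on the clause-gadget dynamics, confirming the p/3 relation:

```python
# A clause is a list of signed literal indices: +i means x_i, -i means not x_i.
def average_reward(clauses, y):
    total, steps = 0.0, 0
    for clause in clauses:
        satisfied = False
        for lit in clause:            # one timestep per literal of the clause
            steps += 1
            if satisfied:             # absorbing state s_{n+1}: reward 0
                continue
            i = abs(lit)
            want = 1 if lit > 0 else 0
            if y[i] == want:          # action "satisfies" the literal
                total += 1.0
                satisfied = True      # move to s_{n+1} for the rest
    return total / steps

clauses = [[1, -2, 7], [4, -1, 7]]    # C1 = (x1 v ~x2 v x7), C2 = (x4 v ~x1 v x7)
y = {i: 0 for i in range(1, 8)}
y[7] = 1                              # setting x7 true satisfies both clauses
print(average_reward(clauses, y))     # 2 satisfied clauses over 6 steps -> 1/3
```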
We are now ready to prove Theorem 11.
Proof: In this proof we make a few changes to the construction given in Lemma 12. We allow the same clause to repeat a few times, and its dynamics are described in n steps rather than 3: in the kth step we move from sk to sk+1 and obtain reward 0, unless the action "satisfies" the chosen clause; if it does, then we obtain an immediate reward of 1, move to sn+1, and stay there for the remaining n − k − 1 steps. After n steps the adversary chooses the next clause uniformly at random. In the analysis we refer to the n steps related to a clause as an iteration. The strategy defined by the algorithm at the kth iteration is the probability it assigns to actions 0/1 at state sℓ just before arriving at sℓ. Note that the strategy at each iteration is actually a stationary policy for M. Thus the strategy in each iteration defines an assignment for the formula. We also note that before an iteration, the expected reward of the optimal stationary policy in the iteration is k/(nm), where k is the maximal number of satisfiable clauses and there are m clauses, so we have E[R*(M)] = k/(nm). If we choose an iteration at random, then the strategy defined in that iteration has an expected reward larger than (0.875 + ε)R*(M), which implies that we can satisfy more than a 0.875 fraction of the satisfiable clauses; but this is impossible unless P = NP.
References
Y. Freund and R. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79-103, 1999.
J. Hastad. Some optimal inapproximability results. J. ACM, 48(4):798-859, 2001.
S. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
A. Kalai and S. Vempala. Efficient algorithms for on-line optimization. In Proceedings of COLT, 2003.
M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. In Proceedings of ICML, 1998.
H. McMahan, G. Gordon, and A. Blum. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the 20th ICML, 2003.
Synergies between Intrinsic and Synaptic
Plasticity in Individual Model Neurons
Jochen Triesch
Dept. of Cognitive Science, UC San Diego, La Jolla, CA, 92093-0515, USA
Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
[email protected]
Abstract
This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. It proposes
a new intrinsic plasticity mechanism for a continuous activation model
neuron based on low order moments of the neuron?s firing rate distribution. The goal of the intrinsic plasticity mechanism is to enforce a sparse
distribution of the neuron?s activity level. In conjunction with Hebbian
learning at the neuron?s synapses, the neuron is shown to discover sparse
directions in the input.
1 Introduction
Neurons in the primate visual system exhibit a sparse distribution of firing rates. In particular, neurons in different visual cortical areas show an approximately exponential distribution of their firing rates in response to stimulation with natural video sequences [1]. The
brain may do this because the exponential distribution maximizes entropy under the constraint of a fixed mean firing rate. The fixed mean firing rate constraint is often considered
to reflect a desired level of metabolic costs. This view is theoretically appealing. However,
it is currently not clear how neurons adjust their firing rate distribution to become sparse.
Several different mechanisms seem to play a role: First, synaptic learning can change a
neuron's response to a distribution of inputs. Second, intrinsic learning may change conductances in the dendrites and soma to adapt the distribution of firing rates [7]. Third, non-linear lateral interactions in a network can make a neuron's responses more sparse [8].
In the extreme case this leads to winner-take-all networks, which form a code where only
a single unit is active for any given stimulus. Such ultra-sparse codes are considered inefficient, however. This paper investigates the interaction of intrinsic and synaptic learning
processes in individual model neurons in the learning of sparse codes.
We consider an individual continuous activation model neuron with a non-linear transfer
function that has adjustable parameters. We are proposing a simple intrinsic learning mechanism based on estimates of low-order moments of the activity distribution that allows the
model neuron to adjust the parameters of its non-linear transfer function to obtain an approximately exponential distribution of its activity. We then show that if combined with a
standard Hebbian learning rule employing multiplicative weight normalization, this leads
to the extraction of sparse features from the input. This is in sharp contrast to standard
Hebbian learning in linear units with multiplicative weight normalization, which leads to
the extraction of the principal Eigenvector of the input correlation matrix. We demonstrate
the behavior of the combined intrinsic and synaptic learning mechanisms on the classic
bars problem [4], a non-linear independent component analysis problem.
The remainder of this paper is organized as follows. Section 2 introduces our scheme for intrinsic plasticity and presents experiments demonstrating the effectiveness of the proposed
mechanism for inducing a sparse firing rate distribution. Section 3 studies the combination
of intrinsic plasticity with Hebbian learning at the synapses and demonstrates how it gives
rise to the discovery of sparse directions in the input. Finally, Sect. 4 discusses our findings
in the context of related work.
2 Intrinsic Plasticity Mechanism
Biological neurons do not only adapt synaptic properties but also change their excitability through the modification of voltage gated channels. Such intrinsic plasticity has been
observed across many species and brain areas [9]. Although our understanding of these
processes and their underlying mechanisms remains quite unclear, it has been hypothesized
that this form of plasticity contributes to a neuron's homeostasis of its mean firing rate level.
Our basic hypothesis is that the goal of intrinsic plasticity is to ensure an approximately exponential distribution of firing rate levels in individual neurons. To our knowledge, this
idea was first investigated in [7], where a Hodgkin-Huxley style model with a number of
voltage gated conductances was considered. A learning rule was derived that adapts the
properties of voltage gated channels to match the firing rate distribution of the unit to a
desired distribution. In order to facilitate the simulation of potentially large networks we
choose a different, more abstract level of modeling employing a continuous activation unit
with a non-linear transfer function. Our model neuron is described by:
$$Y = S_\theta(X), \qquad X = w^T u, \qquad (1)$$
where Y is the neuron's output (firing rate), X is the neuron's total synaptic current, w is the neuron's weight vector representing synaptic strengths, the vector u represents the pre-synaptic input, and S_θ(·) is the neuron's non-linear transfer function (activation function), parameterized by a vector of parameters θ. In this section we will not be concerned with synaptic mechanisms changing the weight vector w, so we will just consider a particular distribution p(X = x) ≡ p(x) of the net synaptic current and consider the resulting distribution of firing rates p(Y = y) ≡ p(y). Intrinsic plasticity is modeled as inducing changes to the non-linear transfer function with the goal of bringing the distribution of activity levels p(y) close to an exponential distribution.
In general terms, the problem is that of matching a distribution to another. Given a signal
with a certain distribution, find a non-linear transfer function that converts the signal to
one with a desired distribution. In image processing, this is typically called histogram
matching. If there are no restrictions on the non-linearity then a solution can always be
found. The standard example is histogram equalization, where a signal is passed through
its own cumulative density function to give a uniform distribution over the interval [0, 1].
While this approach offers a general solution, it is unclear how individual neurons could
achieve this goal. In particular, it requires that the individual neuron can change its nonlinear transfer function arbitrarily, i.e. it requires infinitely many degrees of freedom.
2.1 Intrinsic Plasticity Based on Low Order Moments of Firing Rate
In contrast to the general scheme outlined above the approach proposed here utilizes a
simple sigmoid non-linearity with only two adjustable parameters a and b:
$$S_{ab}(X) = \frac{1}{1 + \exp\left(-(X - b)/a\right)}. \qquad (2)$$
Parameter a > 0 changes the steepness of the sigmoid, while parameter b shifts it left/right.¹ Qualitatively similar changes in spike threshold and slope of the activation
function have been observed in cortical neurons. Since the non-linearity has only two degrees of freedom it is generally not possible to ascertain an exponential activity distribution
for an arbitrary input distribution. A plausible alternative goal is to just match low order
moments of the activity distribution to those of a specific target distribution. Since our
sigmoid non-linearity has two parameters, we consider the first and second moments.
For a random variable T following an exponential distribution with mean μ we have:
$$p(T = t) = \frac{1}{\mu}\exp(-t/\mu); \qquad M_T^1 \equiv \langle T \rangle = \mu; \qquad M_T^2 \equiv \langle T^2 \rangle = 2\mu^2, \qquad (3)$$
where ⟨·⟩ denotes the expected value operator. Our intrinsic plasticity rule is formulated as a set of simple proportional control laws for a and b that drive the first and second moments ⟨Y⟩ and ⟨Y²⟩ of the output distribution to the values of the corresponding moments of an exponential distribution, M_T^1 and M_T^2:
$$\dot{a} = \gamma\left(\langle Y^2 \rangle - 2\mu^2\right), \qquad \dot{b} = \beta\left(\langle Y \rangle - \mu\right), \qquad (4)$$
where γ and β are learning rates. The mean μ of the desired exponential distribution is a free parameter which may vary across cortical areas. Equations (4) describe a system of coupled integro-differential equations where the integration is implicit in the expected value operations. Note that both ⟨Y⟩ and ⟨Y²⟩ depend on the sigmoid parameters a and b. From (4) it is obvious that there is a stationary point of these dynamics if the first and second moments of Y equal the desired values of μ and 2μ², respectively.
The first and second moments of Y need to be estimated online. In our model, we calculate estimates M̂_Y^1 and M̂_Y^2 of ⟨Y⟩ and ⟨Y²⟩ according to:
$$\dot{\hat{M}}_Y^1 = \eta\,(y - \hat{M}_Y^1), \qquad \dot{\hat{M}}_Y^2 = \eta\,(y^2 - \hat{M}_Y^2), \qquad (5)$$
where η is a small learning rate.
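A minimal discrete-time sketch of this rule (ours, not the authors' code) is given below; the step sizes are scaled up relative to the Fig. 1 parameters to shorten the simulation, and the update signs follow the reconstructed eq. (4):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 0.1
gamma, beta, eta = 5e-3, 2e-2, 1e-2   # assumed step sizes (scaled up)
a, b = 1.0, 0.0                       # sigmoid parameters
m1, m2 = mu, 2 * mu**2                # running estimates of <Y>, <Y^2>

for _ in range(200_000):
    x = rng.standard_normal()         # synaptic current X ~ N(0, 1)
    y = 1.0 / (1.0 + np.exp(-(x - b) / a))
    m1 += eta * (y - m1)              # eq. (5)
    m2 += eta * (y * y - m2)
    a += gamma * (m2 - 2 * mu**2)     # eq. (4)
    b += beta * (m1 - mu)
    a = max(a, 1e-3)                  # keep a > 0

print(a, b)   # should drift toward the stationary point (~0.90, ~2.38)
```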
2.2 Experiments with Intrinsic Plasticity Mechanism
We tested the proposed intrinsic plasticity mechanism for a number of distributions of the synaptic current X (Fig. 1). Consider the case where this current follows a Gaussian distribution with zero mean and unit variance: X ∼ N(0, 1). Under this assumption we can calculate the moments ⟨Y⟩ and ⟨Y²⟩ (although only numerically) for any particular values of a and b. Panel a in Fig. 1 shows a phase diagram of this system. Its flow field is sketched and two sample trajectories converging to a stationary point are given. The stationary point is at the intersection of the nullclines where ⟨Y⟩ = μ and ⟨Y²⟩ = 2μ². Its coordinates are a* ≈ 0.90, b* ≈ 2.38. Panel b compares the theoretically optimal transfer function (dotted), which would lead to an exactly exponential distribution of Y, with the learned sigmoidal transfer function (solid). The learned transfer function gives a very good fit. The resulting distribution of Y is in fact very close to the desired exponential distribution. For the general case of a Gaussian input distribution with mean μ_G and standard deviation σ_G, the sigmoid parameters will converge to a → a*σ_G and b → b*σ_G + μ_G under the intrinsic plasticity rule. If the input to the unit can be assumed to be Gaussian, this relation can be used to calculate the desired parameters of the sigmoid non-linearity directly.
¹Note that while we view adjusting a and b as changing the shape of the sigmoid non-linearity, an equivalent view is that a and b are used to linearly rescale the signal X before it is passed through a "standard" logistic function. In general, however, intrinsic plasticity may give rise to non-linear changes that cannot be captured by such a linear re-scaling of all weights.
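The stationary point quoted above for the Gaussian case can be recomputed directly. This sketch (ours) solves ⟨Y⟩ = μ and ⟨Y²⟩ = 2μ² by Gauss-Hermite quadrature, assuming μ = 0.1 as in Fig. 1:

```python
import numpy as np
from scipy.optimize import fsolve

mu = 0.1
nodes, weights = np.polynomial.hermite_e.hermegauss(100)  # E[f(Z)], Z ~ N(0,1)
weights = weights / np.sqrt(2 * np.pi)

def moments(params):
    a, b = params
    y = 1.0 / (1.0 + np.exp(-(nodes - b) / a))
    return [np.dot(weights, y) - mu, np.dot(weights, y**2) - 2 * mu**2]

print(fsolve(moments, x0=[1.0, 2.0]))   # expect approximately [0.90, 2.38]
```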
Figure 1: Dynamics of intrinsic plasticity mechanism for various input distributions. a,b: Gaussian input distribution. Panel a shows the phase plane diagram. Arrows indicate the flow field of the system. Dotted lines indicate approximate locations of the nullclines (found numerically). Two example trajectories are exhibited which converge to the stationary point (marked with a circle). Panel b shows the optimal (dotted) and learned transfer function (solid). The Gaussian input distribution (dashed, not drawn to scale) is also shown. c,d: same as b but for uniform and exponential input distributions. Parameters were μ = 0.1, γ = 5 × 10⁻⁴, β = 2 × 10⁻³, η = 10⁻³.
Panels c and d show the result of intrinsic plasticity for two other input distributions. In the case of a uniform input distribution on the interval [0, 1] (panel c), the optimal transfer function becomes infinitely steep as x → 1. For an exponentially distributed input (panel d), the ideal transfer function would simply be the identity function. In both cases the intrinsic plasticity mechanism adjusts the sigmoid non-linearity in a sensible fashion and the output distribution is a fair approximation of the desired exponential distribution.
2.3 Discussion of the Intrinsic Plasticity Mechanism
The proposed mechanism for intrinsic plasticity is effective in driving a neuron to exhibit
an approximately exponential distribution of firing rates as observed in biological neurons
in the visual system. The general idea is not restricted to the use of a sigmoid non-linearity.
The same adaptation mechanism can also be used in conjunction with, say, an adjustable
threshold-linear activation function. An interesting alternative to the proposed mechanism
can be derived by directly minimizing the KL divergence between the output distribution
and the desired exponential distribution through stochastic gradient descent. The resulting
learning rule, which is closely related to a rule for adapting a sigmoid nonlinearity to max-
imize the output entropy derived by Bell and Sejnowski[2], will be discussed elsewhere. It
leads to very similar results to the ones presented here.
A biological implementation of the proposed mechanism is plausible. All that is needed are
estimates of the first and second moment of the firing rate distribution. A specific, testable
prediction of the simple model is that changes to the distribution of a neuron's firing rate levels that keep the average firing rate of the neuron unchanged but alter the second moment of the firing rate distribution should lead to measurable changes in the neuron's excitability.
3 Combination of Intrinsic and Synaptic Plasticity
In this Section we want to study the effects of simultaneous intrinsic and synaptic learning for an individual model neuron. Synaptic learning is typically modeled with Hebbian
learning rules, of which a large number are being used in the literature. In principle, any
Hebbian learning rule can be combined with our scheme for intrinsic plasticity. Due to
space limitations, we only consider the simplest of all Hebbian learning rules:
$$\Delta w = \alpha\, u\, Y(u) = \alpha\, u\, S_{ab}(w^T u), \qquad (6)$$
where the notation is identical to that of Sec. 2 and α is a learning rate. This learning rule is unstable and needs to be accompanied by a scheme limiting weight growth. We simply adopt a multiplicative normalization scheme that after each update re-scales the weight vector to unit length: w ← w/‖w‖.
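A single step of this combined scheme can be written compactly; the following sketch (ours, with α as our name for the Hebbian rate) applies (6) followed by the normalization:

```python
import numpy as np

def hebb_step(w, u, a, b, alpha=0.01):
    y = 1.0 / (1.0 + np.exp(-(w @ u - b) / a))  # Y = S_ab(w^T u)
    w = w + alpha * u * y                        # eq. (6)
    return w / np.linalg.norm(w)                 # w <- w / ||w||
```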
3.1 Analysis for the Limiting Case of Fast Intrinsic Plasticity
Under a few assumptions, an interesting intuition about the simultaneous intrinsic and Hebbian learning can be gained. Consider the limit of intrinsic plasticity being much faster than
Hebbian plasticity. This may not be very plausible biologically, but it allows for an interesting analysis. In this case we may assume that the non-linearity has adapted to give an
approximately exponential distribution of the firing rate Y before w can change much.
Thus, from (6), Δw can be seen as a weighted sum of the inputs u, with the activities Y acting as weights that follow an approximately exponential distribution. Since similar inputs u will produce similar outputs Y, the expected value of the weight update ⟨Δw⟩ will be dominated by a small set of inputs that produce the highest output activities. The remainder of the inputs will "pull" the weight vector back to the average input ⟨u⟩. Due to the multiplicative weight normalization, the stationary states of the weight vector are reached if Δw is parallel to w, i.e., if ⟨Δw⟩ = kw for some constant k.
A simple example shall illustrate the effect of intrinsic plasticity on Hebbian learning in
more detail. Consider the case where there are only two clusters of inputs at the locations
c1 and c2 . Let us also assume that both clusters account for exactly half of the inputs. If
the weight vector is slightly closer to one of the two clusters, inputs from this cluster will
activate the unit more strongly and will exert a stronger "pull" on the weight vector. Let m = μ ln(2) denote the median of the exponential firing rate distribution with mean μ.
Then inputs from the closer cluster, say c1 , will be responsible for all activities above m
while the inputs from the other cluster will be responsible for all activities below m. Hence,
the expected value of the weight update h?wi will be given by:
$$\langle \Delta w \rangle \;\approx\; \alpha c_1 \int_m^{\infty} \frac{y}{\mu}\, \exp(-y/\mu)\, dy \;+\; \alpha c_2 \int_0^{m} \frac{y}{\mu}\, \exp(-y/\mu)\, dy \qquad (7)$$
$$=\; \frac{\alpha\mu}{2}\,\big((1 + \ln 2)\, c_1 + (1 - \ln 2)\, c_2\big). \qquad (8)$$
Figure 2: Left: relative contributions to the weight vector f_i for N = 1000 input clusters (sorted). Right: the distribution of the f_i is approximately exponential.

Taking the multiplicative weight normalization into account, we see that the weight vector will converge to either of the following two stationary states:
$$w = \frac{(1 \pm \ln 2)\,c_1 + (1 \mp \ln 2)\,c_2}{\|(1 \pm \ln 2)\,c_1 + (1 \mp \ln 2)\,c_2\|}. \qquad (9)$$
The weight vector moves close to one of the two clusters but does not fully commit to it.
For the general case of N input clusters, only a few clusters will strongly contribute to the
final weight vector. Generalizing the result from above, it is not difficult to derive that the
weight vector w will be proportional to a weighted sum of the cluster centers:
$$w \;\propto\; \sum_{i=1}^{N} f_i\, c_i\,; \quad \text{with } f_i = 1 + \log(N) - i\log(i) + (i-1)\log(i-1), \qquad (10)$$
where we define 0 log(0) ≡ 0. Here, f_i denotes the relative contribution of the i-th closest input cluster to the final weight vector. There can be at most N! resulting weight vectors owing to the number of possible assignments of the f_i to the clusters. Note that the final weight vector does not depend on the desired mean activity level μ. Fig. 2 plots (10) for N = 1000 (left) and shows that the resulting distribution of the f_i is approximately exponential (right).
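The contributions in (10) are easy to reproduce. The sketch below (ours) computes the f_i for N = 1000 and checks that they sum to N (the inner terms telescope), so the normalized contributions f_i/N peak near 8 × 10⁻³, matching the scale of Fig. 2:

```python
import numpy as np

N = 1000
i = np.arange(1, N + 1, dtype=float)
ilog = lambda k: np.where(k > 0, k * np.log(np.maximum(k, 1.0)), 0.0)  # 0 log 0 := 0
f = 1 + np.log(N) - ilog(i) + ilog(i - 1)
print(f.sum())          # equals N
print((f / N)[:5])      # relative contributions of the closest clusters
```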
We can see why such a weight vector may correspond to a sparse direction in the input space
as follows: consider the case where the input cluster centers are random vectors of unit
length in a high-dimensional space. It is a property of high-dimensional spaces that random
vectors are approximately orthogonal, so that c_i^T c_j ≈ δ_ij, where δ_ij is the Kronecker delta. If we consider the projection of an input from an arbitrary cluster, say c_j, onto the weight vector, we see that w^T c_j ≈ Σ_i f_i c_i^T c_j ≈ f_j. The distribution of X = w^T u follows the distribution of the f_i, which is approximately exponential. Thus, the projection of all
inputs onto the weight vector has an approximately exponential distribution. Note that this
behavior is markedly different from Hebbian learning in a linear unit which leads to the
extraction of the principal eigenvector of the input correlation matrix.
It is interesting to note that in this situation the optimal transfer function S* that will make the unit's activity Y have an exponential distribution with a desired mean μ is simply multiplication by a constant k, i.e. S*(X) = kX. Thus, depending on the initial weight vector and the resulting distribution of X, the neuron's activation function may transiently adapt
to enforce an approximately exponential firing rate distribution, but the simultaneous Hebbian learning drives it back to a linear form. In the end, a simple linear activation function
may result from this interplay of intrinsic and synaptic plasticity. In fact, the observation
of approximately linear activation functions in cortical neurons is not uncommon.
Figure 3: Left: example stimuli from the "bars" problem for a 10 by 10 pixel retina. Right: the activity record shows the unit's response to every 10th input pattern. Below, we show
the learned weight vector after presentation of 10,000, 20,000, and 30,000 training patterns.
3.2 Application to the "Bars" Problem
The "bars" problem is a standard problem for unsupervised learning architectures [4]. It
is a non-linear ICA problem for which traditional ICA approaches have been shown to
fail [5]. The input domain consists of an N -by-N retina. On this retina, all horizontal
and vertical bars (2N in total) can be displayed. The presence or absence of each bar is
determined independently, with every bar occurring with the same probability p (in our
case p = 1/N ). If a horizontal and a vertical bar overlap, the pixel at the intersection point
will be just as bright as any other pixels on the bars, rather than twice as bright. This makes
the problem a non-linear ICA problem. Example stimuli from the bars dataset are shown
in Fig. 3 (left). Note that we normalize input vectors to unit length. The goal of learning in
the bars problem is to find the independent sources of the images, i.e., the individual bars.
Thus, the neural learning system should develop filters that represent the individual bars.
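For reference, a generator for this input domain might look as follows (a sketch of ours, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def bars_stimulus(N=10):
    """N-by-N retina; each of the 2N bars appears with p = 1/N;
    overlaps stay at intensity 1 (the non-linear ICA aspect)."""
    img = np.zeros((N, N))
    for r in np.nonzero(rng.random(N) < 1.0 / N)[0]:
        img[r, :] = 1.0                      # horizontal bar
    for c in np.nonzero(rng.random(N) < 1.0 / N)[0]:
        img[:, c] = 1.0                      # vertical bar
    u = img.ravel()
    n = np.linalg.norm(u)
    return u / n if n > 0 else u             # normalize to unit length

x = bars_stimulus()
print(x.shape, np.count_nonzero(x))
```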
We have trained an individual sigmoidal model neuron on the bars input domain. The
theoretical analysis above assumed that intrinsic plasticity is much faster than synaptic
plasticity. Here, we set the intrinsic plasticity to be slower than the synaptic plasticity,
which is more plausible biologically, to see if this may still allow the discovery of sparse
directions in the input. As illustrated in Fig. 3 (right), the unit's weight vector aligns with one of the individual bars as soon as the intrinsic plasticity has pushed the model neuron into a regime where its responses are sparse: the unit has discovered one of the independent sources of the input domain. This result is robust if the desired mean activity μ of the unit is changed over a wide range. If μ is reduced from its default value (1/2N = 0.05) over several orders of magnitude (we tried down to 10⁻⁵), the result remains unchanged. However, if μ is increased above about 0.15, the unit will fail to represent an individual bar but will instead learn a mixture of two or more bars, with different bars being represented with different strengths. Thus, in this example, in contrast to the theoretical result above, the desired mean activity μ does influence the weight vector that is being learned. The reason for this is that the intrinsic plasticity only imperfectly adjusts the output distribution to the desired exponential shape. As can be seen in Fig. 3, the output has a multimodal structure. For low μ, only the highest mode, which corresponds to a specific single bar presented in isolation, contributes strongly to the weight vector.
4 Discussion
Biological neurons are highly adaptive computation devices. While the plasticity of a neuron's synapses has always been a core topic of neural computation research, there has been little work investigating the computational properties of intrinsic plasticity mechanisms and the relation between intrinsic and synaptic learning. This paper has investigated the potential role of intrinsic learning mechanisms operating at the soma when used in conjunction with Hebbian learning at the synapses. To this end, we have proposed a new intrinsic plasticity mechanism that adjusts the parameters of a sigmoid nonlinearity to move the neuron's
firing rate distribution to a sparse regime. The learning mechanism is effective in producing approximately exponential firing rate distributions as observed in neurons in the visual
system of cats and primates. Studying simultaneous intrinsic and synaptic learning, we
found a synergistic relation between the two. We demonstrated how the two mechanisms
may cooperate to discover sparse directions in the input. When applied to the classic "bars" problem, a single unit was shown to discover one of the independent sources as soon as the intrinsic plasticity moved the unit's activity distribution into a sparse regime. Thus, this research is related to other work in the area of Hebbian projection pursuit and Hebbian ICA, e.g., [3, 6]. In such approaches, the "standard" Hebbian weight update rule is modified to
allow the discovery of non-gaussian directions in the input. We have shown that the combination of intrinsic plasticity with the standard Hebbian learning rule can be sufficient for
the discovery of sparse directions in the input. Future work will analyze the combination
of intrinsic plasticity with other Hebbian learning rules. Further, we would like to consider
networks of such units and the formation of map-like representations. The nonlinear nature
of the transfer function may facilitate the construction of hierarchical networks for unsupervised learning. It will also be interesting to study the effects of intrinsic plasticity in the
context of recurrent networks, where it may contribute to keeping the network in a certain
desired dynamic regime.
Acknowledgments
The author is supported by the National Science Foundation under grants NSF 0208451
and NSF 0233200. I thank Erik Murphy-Chutorian and Emanuel Todorov for discussions
and comments on earlier drafts.
References
[1] R. Baddeley, L. F. Abbott, M. C. Booth, F. Sengpiel, and T. Freeman. Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proc. R. Soc. London, Ser. B, 264:1775-1783, 1998.
[2] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7:1129-1159, 1995.
[3] B. S. Blais, N. Intrator, H. Shouval, and L. N. Cooper. Receptive field formation in natural scene environments. Neural Computation, 10:1797-1813, 1998.
[4] P. Földiák. Forming sparse representations by local anti-hebbian learning. Biological Cybernetics, 64:165-170, 1990.
[5] S. Hochreiter and J. Schmidhuber. Feature extraction through LOCOCODE. Neural Computation, 11(3):679-714, 1999.
[6] A. Hyvärinen and E. Oja. Independent component analysis by general nonlinear hebbian-like learning rules. Signal Processing, 64(3):301-313, 1998.
[7] M. Stemmler and C. Koch. How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nature Neuroscience, 2(6):521-527, 1999.
[8] W. E. Vinje and J. L. Gallant. Sparse coding and decorrelation in primary visual cortex during natural vision. Science, 287:1273-1276, 2000.
[9] W. Zhang and D. J. Linden. The other side of the engram: Experience-driven changes in neuronal intrinsic excitability. Nature Reviews Neuroscience, 4:885-900, 2003.
Joint Tracking of Pose, Expression, and Texture
using Conditionally Gaussian Filters
Tim K. Marks
John Hershey
Department of Cognitive Science
University of California San Diego
La Jolla, CA 92093-0515
[email protected]
[email protected]
J. Cooper Roddey
Javier R. Movellan
Institute for Neural Computation
University of California San Diego
La Jolla, CA 92093-0523
[email protected]
[email protected]
Abstract
We present a generative model and stochastic filtering algorithm for simultaneous tracking of 3D position and orientation, non-rigid motion,
object texture, and background texture using a single camera. We show
that the solution to this problem is formally equivalent to stochastic filtering of conditionally Gaussian processes, a problem for which well
known approaches exist [3, 8]. We propose an approach based on Monte
Carlo sampling of the nonlinear component of the process (object motion) and exact filtering of the object and background textures given the
sampled motion. The smoothness of image sequences in time and space
is exploited by using Laplace?s method to generate proposal distributions
for importance sampling [7]. The resulting inference algorithm encompasses both optic flow and template-based tracking as special cases, and
elucidates the conditions under which these methods are optimal. We
demonstrate an application of the system to 3D non-rigid face tracking.
1 Background
Recent algorithms track morphable objects by solving optic flow equations, subject to the
constraint that the tracked points belong to an object whose non-rigid deformations are
linear combinations of a set of basic shapes [10, 2, 11]. These algorithms require precise
initialization of the object pose and tend to drift out of alignment on long video sequences.
We present G-flow, a generative model and stochastic filtering formulation of tracking that
address the problems of initialization and error recovery in a principled manner.
We define a non-rigid object by the 3D locations of n vertices. The object is a linear combination of k fixed morph bases, with coefficients c = [c_1, c_2, · · ·, c_k]^T. The fixed 3 × k
matrix hi contains the position of the ith vertex in all k morph bases. The transformation
from object-centered to image coordinates consists of a rotation, weak perspective projection, and translation. Thus xi , the 2D location of the ith vertex on the image plane, is
$$x_i = g\, r\, h_i\, c + l, \qquad (1)$$
where r is the 3 × 3 rotation matrix, l is the 2 × 1 translation vector, and g = [1 0 0; 0 1 0] is the
projection matrix. The object pose, ut , comprises both the rigid motion parameters and the
morph parameters at time t:
$$u_t = \{r(t),\, l(t),\, c(t)\}. \qquad (2)$$
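The projection model (1) is straightforward to implement; the following sketch (ours, with arbitrary example values) maps one vertex to the image plane:

```python
import numpy as np

def project_vertex(h_i, c, r, l):
    """h_i: 3 x k morph basis for vertex i; c: k coefficients;
    r: 3 x 3 rotation (scaled for weak perspective); l: 2D translation."""
    g = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])          # drop depth after rotation
    return g @ r @ (h_i @ c) + l

k = 5
h_i = np.random.default_rng(0).standard_normal((3, k))
x_i = project_vertex(h_i, np.ones(k) / k, np.eye(3), np.zeros(2))
print(x_i)
```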
1.1 Optic flow
Let yt represent the current image, and let xi (ut ) index the image pixel that is rendered by
the ith object vertex when the object assumes pose u_t. Suppose that we know u_{t-1}, the pose at time t − 1, and we want to find u_t, the pose at time t. This problem can be solved by minimizing the following form with respect to u_t:
$$\hat{u}_t = \operatorname*{argmin}_{u_t}\; \frac{1}{2}\sum_{i=1}^{n}\big[y_t(x_i(u_t)) - y_{t-1}(x_i(u_{t-1}))\big]^2. \qquad (3)$$
In the special case in which the xi (ut ) are neighboring points that move with the same
2D displacement, this reduces to the standard Lucas-Kanade optic flow algorithm [9, 1].
Recent work [10, 2, 11] has shown that in the general case, this optimization problem can
be solved efficiently using the Gauss-Newton method. We will take advantage of this fact
to develop an efficient stochastic inference algorithm within the framework of G-flow.
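A generic damped Gauss-Newton step for a least-squares objective like (3) has the following shape (a schematic of ours; the residual and Jacobian callables stand in for image sampling and the chain rule through the projection model):

```python
import numpy as np

def gauss_newton_step(u, residual, jacobian, damping=1e-6):
    r = residual(u)                          # (n,) stacked residuals
    J = jacobian(u)                          # (n, dim(u)) Jacobian
    H = J.T @ J + damping * np.eye(J.shape[1])
    return u - np.linalg.solve(H, J.T @ r)   # u <- u - (J^T J)^-1 J^T r
```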
Notational conventions Unless otherwise stated, capital letters are used for random variables, small letters for specific values taken by random variables, and Greek letters for fixed
model parameters. Subscripted colons indicate sequences: e.g., X_{1:t} = X_1 · · · X_t. The term I_n stands for the n × n identity matrix, E for expected value, Var for the covariance matrix, and Var^{-1} for the inverse of the covariance matrix (precision matrix).
2 The Generative Model for G-Flow
Figure 1: Left: a(Ut ) determines which texel (color at a vertex of the object model or a pixel of the
background model) is responsible for rendering each image pixel. Right: G-flow video generation
model: At time t, the object's 3D pose, Ut, is used to project the object texture, Vt, into 2D. This
projection is combined with the background texture, Bt , to generate the observed image, Yt .
We model the image sequence Y as a stochastic process generated by three hidden causes,
U, V, and B, as shown in the graphical model (Figure 1, right). The m × 1 random vector Y_t represents the m-pixel image at time t. The n × 1 random vector V_t and the m × 1
random vector Bt represent the n-texel object texture and the m-texel background texture,
respectively. As illustrated in Figure 1, left, the object pose, Ut , determines onto which
image pixels the object and background texels project at time t. This is formulated using
the projection function a(U_t). For a given pose, u_t, the projection a(u_t) is a block matrix, a(u_t) := [a_v(u_t)  a_b(u_t)]. Here a_v(u_t), the object projection function, is an m × n matrix of 0s and 1s that tells onto which image pixel each object vertex projects; e.g., a 1 at row j, column i means that the ith object point projects onto image pixel j. Matrix a_b plays the same role for background pixels. Assuming the foreground mapping is one-to-one, we let a_b = I_m − a_v(u_t) a_v(u_t)^T, expressing the simple occlusion constraint that every
image pixel is rendered by object or background, but not both. In the G-flow generative
model:
$$Y_t = a(U_t)\begin{bmatrix} V_t \\ B_t \end{bmatrix} + W_t, \qquad W_t \sim N(0, \sigma_w I_m),\ \ \sigma_w > 0$$
$$U_t \sim p(u_t \mid u_{t-1}) \qquad (4)$$
$$V_t = V_{t-1} + Z^v_{t-1}, \qquad Z^v_{t-1} \sim N(0, \Psi_v),\ \ \Psi_v \text{ is diagonal}$$
$$B_t = B_{t-1} + Z^b_{t-1}, \qquad Z^b_{t-1} \sim N(0, \Psi_b),\ \ \Psi_b \text{ is diagonal}$$
where p(u_t | u_{t-1}) is the pose transition distribution, and Z^v, Z^b, W are independent of
each other, of the initial conditions, and over time. The form of the pose distribution is left
unspecified since the algorithm proposed here does not require the pose distribution or the
pose dynamics to be Gaussian. For the initial conditions, we require that the variance of V1
and the variance of B1 are both diagonal.
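To make the model concrete, here is a toy one-frame sampler (ours, with made-up noise levels and a precomputed 0/1 projection matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_frame(a_u, v, b, sigma_w=0.01, psi_v=1e-4, psi_b=1e-4):
    """a_u: m x (n + m) projection matrix a(u); v, b: texel means."""
    v = v + np.sqrt(psi_v) * rng.standard_normal(v.shape)   # V_t
    b = b + np.sqrt(psi_b) * rng.standard_normal(b.shape)   # B_t
    y = a_u @ np.concatenate([v, b])                         # render texels
    y += np.sqrt(sigma_w) * rng.standard_normal(y.shape)    # observation noise W_t
    return y, v, b
```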
Non-rigid 3D tracking is a difficult nonlinear filtering problem because changing the pose
has a nonlinear effect on the image pixels. Fortunately, the problem has a rich structure
that we can exploit: under the G-flow model, video generation is a conditionally Gaussian
process [3, 6, 4, 5]. If the specific values taken by the pose sequence, u1:t , were known,
then the texture processes, V and B, and the image process, Y , would be jointly Gaussian.
This suggests the following scheme: we could use particle filtering to obtain a distribution
of pose experts (each expert corresponds to a highly probable sample of pose, u1:t ). For
each expert we could then use Kalman filtering equations to infer the posterior distribution
of texture given the observed images. This method is known in the statistics community as
a Monte Carlo filtering solution for conditionally Gaussian processes [3, 4], and in the machine learning community as Rao-Blackwellized particle filtering [6, 5]. We found that in
addition to Rao-Blackwellization, it was also critical to use Laplace?s method to generate
the proposal distributions for importance sampling [7]. In the context of G-flow, we accomplished this by performing an optic flow-like optimization, using an efficient algorithm
similar to those in [10, 2].
3 Inference
Our goal is to find an expression for the filtering distribution, p(ut , vt , bt | y1:t ). Using the
law of total probability, we have the following equation for the filtering distribution:
$$p(u_t, v_t, b_t \mid y_{1:t}) = \int \underbrace{p(u_t, v_t, b_t \mid u_{1:t-1}, y_{1:t})}_{\text{Opinion of expert}}\;\; \underbrace{p(u_{1:t-1} \mid y_{1:t})}_{\text{Credibility of expert}}\; du_{1:t-1} \qquad (5)$$
We can think of the integral in (5) as a sum over a distribution of experts, where each expert
corresponds to a single pose history, u_{1:t-1}. Based on its hypothesis about pose history, each expert has an opinion about the current pose of the object, U_t, and the texture maps of the object and background, V_t and B_t. Each expert also has a credibility, a scalar that measures how well the expert's opinion matches the observed image y_t. Thus, (5) can be interpreted as follows: The filtering distribution at time t is obtained by integrating over the entire ensemble of experts the opinion of each expert weighted by that expert's credibility. The opinion distribution of expert u_{1:t-1} can be factorized into the expert's opinion about the pose U_t times the conditional distribution of texture V_t, B_t given pose:
$$p(u_t, v_t, b_t \mid u_{1:t-1}, y_{1:t}) = \underbrace{p(u_t \mid u_{1:t-1}, y_{1:t})}_{\text{Pose opinion}}\;\; \underbrace{p(v_t, b_t \mid u_{1:t}, y_{1:t})}_{\text{Texture opinion given pose}} \qquad (6)$$
(The left-hand side is the opinion of expert u_{1:t-1}.)
The rest of this section explains how we evaluate each term in (5) and (6). We cover the
distribution of texture given pose in 3.1, pose opinion in 3.2, and credibility in 3.3.
3.1 Texture opinion given pose
The distribution of Vt and Bt given the pose history u1:t is Gaussian with mean and covariance that can be obtained using the Kalman filter estimation equations:
$$\mathrm{Var}^{-1}(V_t, B_t \mid u_{1:t}, y_{1:t}) = \mathrm{Var}^{-1}(V_t, B_t \mid u_{1:t-1}, y_{1:t-1}) + a(u_t)^T \sigma_w^{-1} a(u_t) \qquad (7)$$
$$E(V_t, B_t \mid u_{1:t}, y_{1:t}) = \mathrm{Var}(V_t, B_t \mid u_{1:t}, y_{1:t})\Big[\mathrm{Var}^{-1}(V_t, B_t \mid u_{1:t-1}, y_{1:t-1})\,E(V_t, B_t \mid u_{1:t-1}, y_{1:t-1}) + a(u_t)^T \sigma_w^{-1} y_t\Big] \qquad (8)$$
This requires p(V_t, B_t | u_{1:t-1}, y_{1:t-1}), which we get from the Kalman prediction equations:
$$E(V_t, B_t \mid u_{1:t-1}, y_{1:t-1}) = E(V_{t-1}, B_{t-1} \mid u_{1:t-1}, y_{1:t-1}) \qquad (9)$$
$$\mathrm{Var}(V_t, B_t \mid u_{1:t-1}, y_{1:t-1}) = \mathrm{Var}(V_{t-1}, B_{t-1} \mid u_{1:t-1}, y_{1:t-1}) + \begin{bmatrix} \Psi_v & 0 \\ 0 & \Psi_b \end{bmatrix} \qquad (10)$$
In (9), the expected value E(Vt , Bt | u1:t?1 , y1:t?1 ) consists of texture maps (templates)
for the object and background. In (10), V ar(Vt , Bt | u1:t?1 , y1:t?1 ) represents the degree
of uncertainty about each texel in these texture maps. Since this is a diagonal matrix, we
can refer to the mean and variance of each texel individually. For the ith texel in the object
texture map, we use the following notation:
$$\bar{v}_t(i) \;:=\; i\text{th element of } E(V_t \mid u_{1:t-1}, y_{1:t-1})$$
$$\sigma_t^v(i) \;:=\; (i,i)\text{th element of } \mathrm{Var}(V_t \mid u_{1:t-1}, y_{1:t-1})$$
Similarly, define b̄_t(j) and σ_t^b(j) as the mean and variance of the jth texel in the background texture map. (This notation leaves the dependency on u_{1:t-1} and y_{1:t-1} implicit.)
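Because the texture covariances are diagonal and the projection is one-to-one, the update implied by (7)-(10) reduces to independent scalar Kalman updates per texel, as in this sketch (ours):

```python
import numpy as np

def update_object_texels(v_mean, v_var, y, pix_of_texel, sigma_w, psi_v):
    """pix_of_texel[i] is the image pixel rendered by object texel i."""
    v_var = v_var + psi_v                       # prediction, eq. (10)
    obs = y[pix_of_texel]                       # observed pixel per texel
    gain = v_var / (v_var + sigma_w)            # scalar Kalman gain
    v_mean = v_mean + gain * (obs - v_mean)     # estimation, eqs. (7)-(8)
    v_var = (1.0 - gain) * v_var
    return v_mean, v_var
```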
3.2 Pose opinion
Based on its current texture template (derived from the history of poses and images up to
time t − 1) and the new image y_t, each expert u_{1:t-1} has a pose opinion, p(u_t | u_{1:t-1}, y_{1:t}), a probability distribution representing that expert's beliefs about the pose at time t. Since the effect of u_t on the likelihood function is nonlinear, we will not attempt to find an analytical solution for the pose opinion distribution. However, due to the spatio-temporal smoothness of video signals, it is possible to estimate the peak and variance of an expert's pose opinion.
3.2.1 Estimating the peak of an expert's pose opinion
We want to estimate û_t(u_{1:t-1}), the value of u_t that maximizes the pose opinion. Since
$$p(u_t \mid u_{1:t-1}, y_{1:t}) = \frac{p(y_{1:t-1} \mid u_{1:t-1})}{p(y_{1:t} \mid u_{1:t-1})}\; p(u_t \mid u_{t-1})\; p(y_t \mid u_{1:t}, y_{1:t-1}), \qquad (11)$$
$$\hat{u}_t(u_{1:t-1}) \;:=\; \operatorname*{argmax}_{u_t}\, p(u_t \mid u_{1:t-1}, y_{1:t}) = \operatorname*{argmax}_{u_t}\, p(u_t \mid u_{t-1})\; p(y_t \mid u_{1:t}, y_{1:t-1}). \qquad (12)$$
We now need an expression for the final term in (12), the predictive distribution p(y_t | u_{1:t}, y_{1:t-1}). By integrating out the hidden texture variables from p(y_t, v_t, b_t | u_{1:t}, y_{1:t-1}), and using the conditional independence relationships defined by the graphical model (Figure 1, right), we can derive:
$$\log p(y_t \mid u_{1:t}, y_{1:t-1}) = -\frac{m}{2}\log 2\pi - \frac{1}{2}\log\big|\mathrm{Var}(Y_t \mid u_{1:t}, y_{1:t-1})\big|$$
$$-\;\frac{1}{2}\sum_{i=1}^{n}\frac{\big(y_t(x_i(u_t)) - \bar{v}_t(i)\big)^2}{\sigma_t^v(i) + \sigma_w} \;-\; \frac{1}{2}\sum_{j \notin \mathcal{X}(u_t)}\frac{\big(y_t(j) - \bar{b}_t(j)\big)^2}{\sigma_t^b(j) + \sigma_w}, \qquad (13)$$
where x_i(u_t) is the image pixel rendered by the ith object vertex when the object assumes pose u_t, and X(u_t) is the set of all image pixels rendered by the object under pose u_t.
Combining (12) and (13), we can derive
$$\hat{u}_t(u_{1:t-1}) = \operatorname*{argmin}_{u_t}\; -\log p(u_t \mid u_{t-1}) \qquad (14)$$
$$+\;\frac{1}{2}\sum_{i=1}^{n}\Bigg(\underbrace{\frac{[y_t(x_i(u_t)) - \bar{v}_t(i)]^2}{\sigma_t^v(i) + \sigma_w}}_{\text{Foreground term}}\;\; \underbrace{-\;\frac{[y_t(x_i(u_t)) - \bar{b}_t(x_i(u_t))]^2}{\sigma_t^b(x_i(u_t)) + \sigma_w} \;-\; \log\big[\sigma_t^b(x_i(u_t)) + \sigma_w\big]}_{\text{Background terms}}\Bigg)$$
Note the similarity between (14) and constrained optic flow (3). For example, focus on the
foreground term in (14) and ignore the weights in the denominator. The previous image
y_{t-1} from (3) has been replaced by v̄_t(·), the estimated object texture based on the images and poses up to time t − 1. As in optic flow, we can find the pose estimate û_t(u_{1:t-1}) efficiently using the Gauss-Newton method.
3.2.2 Estimating the distribution of an expert's pose opinion
We estimate the distribution of an expert's pose opinion using a combination of Laplace's method and importance sampling. Suppose at time t − 1 we are given a sample of experts indexed by d, each endowed with a pose sequence u^{(d)}_{1:t-1}, a weight w^{(d)}_{t-1}, and the means and variances of Gaussian distributions for object and background texture. For each expert u^{(d)}_{1:t-1}, we use (14) to compute û^{(d)}_t, the peak of the pose distribution at time t according to that expert. Define Λ̂^{(d)}_t as the inverse Hessian matrix of (14) at this peak, the Laplace estimate of the covariance matrix of the expert's opinion. We then generate a set of s independent samples {u^{(d,e)}_t : e = 1, · · ·, s} from a Gaussian distribution with mean û^{(d)}_t and variance proportional to Λ̂^{(d)}_t, namely g(· | û^{(d)}_t, λΛ̂^{(d)}_t), where the parameter λ > 0 determines the sharpness of the sampling distribution. (Note that letting λ → 0 would be equivalent to simply setting the new pose equal to the peak of the pose opinion, u^{(d,e)}_t = û^{(d)}_t.) To find the parameters of this Gaussian proposal distribution, we use the Gauss-Newton method, ignoring the second of the two background terms in (14). (This term is not ignored in the importance sampling step.)
To refine our estimate of the pose opinion we use importance sampling. We assign each
sample from the proposal distribution an importance weight wt (d, e) that is proportional to
the ratio between the posterior distribution and the proposal distribution:
$$\hat{p}(u_t \mid u^{(d)}_{1:t-1}, y_{1:t}) = \sum_{e=1}^{s}\delta\big(u_t - u^{(d,e)}_t\big)\,\frac{w_t(d,e)}{\sum_{f=1}^{s} w_t(d,f)} \qquad (15)$$
$$w_t(d,e) = \frac{p\big(u^{(d,e)}_t \mid u^{(d)}_{t-1}\big)\; p\big(y_t \mid u^{(d)}_{1:t-1}, u^{(d,e)}_t, y_{1:t-1}\big)}{g\big(u^{(d,e)}_t \mid \hat{u}^{(d)}_t, \lambda\hat{\Lambda}^{(d)}_t\big)} \qquad (16)$$
The numerator of (16) is proportional to p(u^{(d,e)}_t | u^{(d)}_{1:t-1}, y_{1:t}) by (12), and the denominator of (16) is the sampling distribution.
3.3 Estimating an expert's credibility
The credibility of the dth expert, p(u^{(d)}_{1:t-1} | y_{1:t}), is proportional to the product of a prior term and a likelihood term:
$$p\big(u^{(d)}_{1:t-1} \mid y_{1:t}\big) = \frac{p\big(u^{(d)}_{1:t-1} \mid y_{1:t-1}\big)\; p\big(y_t \mid u^{(d)}_{1:t-1}, y_{1:t-1}\big)}{p(y_t \mid y_{1:t-1})}. \qquad (17)$$
Regarding the likelihood,
$$p(y_t \mid u_{1:t-1}, y_{1:t-1}) = \int p(y_t, u_t \mid u_{1:t-1}, y_{1:t-1})\,du_t = \int p(y_t \mid u_{1:t}, y_{1:t-1})\,p(u_t \mid u_{t-1})\,du_t \qquad (18)$$
We already generated a set of samples {u^{(d,e)}_t : e = 1, · · ·, s} that estimate the pose opinion of the dth expert, p(u_t | u^{(d)}_{1:t-1}, y_{1:t}). We can now use these samples to estimate the likelihood for the dth expert:
$$p\big(y_t \mid u^{(d)}_{1:t-1}, y_{1:t-1}\big) = \int p\big(y_t \mid u^{(d)}_{1:t-1}, u_t, y_{1:t-1}\big)\,p\big(u_t \mid u^{(d)}_{t-1}\big)\,du_t \qquad (19)$$
$$= \int p\big(y_t \mid u^{(d)}_{1:t-1}, u_t, y_{1:t-1}\big)\, g\big(u_t \mid \hat{u}^{(d)}_t, \lambda\hat{\Lambda}^{(d)}_t\big)\,\frac{p\big(u_t \mid u^{(d)}_{t-1}\big)}{g\big(u_t \mid \hat{u}^{(d)}_t, \lambda\hat{\Lambda}^{(d)}_t\big)}\,du_t \;\approx\; \frac{1}{s}\sum_{e=1}^{s} w_t(d,e)$$

3.4 Updating the filtering distribution
Once we have calculated the opinion and credibility of each expert u_{1:t-1}, we evaluate the integral in (5) as a weighted sum over experts. The credibilities of all of the experts are normalized to sum to 1. New experts u_{1:t} (children) are created from the old experts u_{1:t-1} (parents) by appending a pose u_t to the parent's history of poses u_{1:t-1}. Every expert in the new generation is created as follows: One parent is chosen to sire the child. The probability of being chosen is proportional to the parent's credibility. The child's value of u_t is chosen at random from its parent's pose opinion (the weighted samples described in Section 3.2.2).
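This parent-selection and pose-sampling step can be sketched as follows (ours, with illustrative array shapes):

```python
import numpy as np

rng = np.random.default_rng(2)

def propagate(credibility, pose_samples, pose_weights):
    """credibility: (D,); pose_samples: (D, s, dim); pose_weights: (D, s)."""
    D, s, _ = pose_samples.shape
    parents = rng.choice(D, size=D, p=credibility / credibility.sum())
    children = []
    for d in parents:
        w = pose_weights[d] / pose_weights[d].sum()
        e = rng.choice(s, p=w)
        children.append((d, pose_samples[d, e]))   # (parent index, new pose)
    return children
```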
4 Relation to Optic Flow and Template Matching
In basic template-matching, the same time-invariant texture map is used to track every
frame in the video sequence. Optic flow can be thought of as template-matching with a
template that is completely reset at each frame for use in the subsequent frame. In most
cases, optimal inference under G-flow involves a combination of optic flow-based and
template-based tracking, in which the texture template gradually evolves as new images
are presented. Pure optic flow and template-matching emerge as special cases.
Optic Flow as a Special Case  Suppose that the pose transition probability p(u_t | u_{t-1}) is uninformative, that the background is uninformative, that every texel in the initial object texture map has equal variance, Var(V_1) = σI_n, and that the texture transition uncertainty is very high, Ψ_v → diag(∞). Using (7), (8), and (10), it follows that:
$$\bar{v}_t(i) = \big[a_v(u_{t-1})^T y_{t-1}\big]_i = y_{t-1}\big(x_i(u_{t-1})\big), \qquad (20)$$
i.e., the object texture map at time t is determined by the pixels from image y_{t-1} that according to pose u_{t-1} were rendered by the object. As a result, (14) reduces to:
$$\hat{u}_t(u_{1:t-1}) = \operatorname*{argmin}_{u_t}\; \frac{1}{2}\sum_{i=1}^{n}\big[y_t(x_i(u_t)) - y_{t-1}(x_i(u_{t-1}))\big]^2 \qquad (21)$$
which is identical to (3). Thus constrained optic flow [10, 2, 11] is simply a special case of
optimal inference under G-flow, with a single expert and with sampling parameter λ → 0. The key assumption that Ψ_v → diag(∞) means that the object's texture is very different in adjacent frames. However, optic flow is typically applied in situations in which the object's
texture in adjacent frames is similar. The optimal solution in such situations calls not for
optic flow, but for a texture map that integrates information across multiple frames.
Template Matching as a Special Case  Suppose the initial texture map is known precisely, Var(V_1) = 0, and the texture transition uncertainty is very low, Ψ_v → 0. By (7), (8), and (10), it follows that v̄_t(i) = v̄_{t-1}(i) = v̄_1(i), i.e., the texture map does not change over time, but remains fixed at its initial value (it is a texture template). Then (14) becomes:
$$\hat{u}_t(u_{1:t-1}) = \operatorname*{argmin}_{u_t}\; \sum_{i=1}^{n}\big[y_t(x_i(u_t)) - \bar{v}_1(i)\big]^2 \qquad (22)$$
where v̄_1(i) is the ith texel of the fixed texture template. This is the error function minimized by standard template-matching algorithms. The key assumption that Ψ_v → 0 means the object's texture is constant from each frame to the next, which is rarely true in real data.
G-flow provides a principled way to relax this unrealistic assumption of template methods.
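For concreteness, the two limiting error functions can be written directly. In this sketch, project(u) is a hypothetical helper returning the image locations x_i(u) of the n texels, and sample(img, pts) reads an image at those locations:

```python
import numpy as np

def optic_flow_error(u_t, u_prev, y_t, y_prev, project, sample):
    """Eq. (21): match current pixels against the previous frame's pixels."""
    diff = sample(y_t, project(u_t)) - sample(y_prev, project(u_prev))
    return 0.5 * np.sum(diff ** 2)

def template_error(u_t, y_t, template, project, sample):
    """Eq. (22): match current pixels against a fixed texture template v_1."""
    diff = sample(y_t, project(u_t)) - template
    return np.sum(diff ** 2)
```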
General Case In general, if the background is uninformative, then minimizing (14) results in a weighted combination of optic flow and template matching, with the weight of
each approach depending on the current level of certainty about the object template. In
addition, when there is useful information in the background, G-flow infers a model of the
background which is used to improve tracking.
Figure 2: G-flow tracking an outdoor video. Results are shown for frames 1, 81, and 620.
5  Simulations
We collected a video (30 frames/sec) of a subject in an outdoor setting who made a variety
of facial expressions while moving her head. A later motion-capture session was used to
create a 3D morphable model of her face, consisting of a set of 5 morph bases (k = 5).
Twenty experts were initialized randomly near the correct pose on frame 1 of the video
and propagated using G-flow inference (assuming an uninformative background). See
http://mplab.ucsd.edu for video. Figure 2 shows the distribution of experts for three frames.
In each frame, every expert has a hypothesis about the pose (translation, rotation, scale, and
morph coefficients). The 38 points in the model are projected into the image according to
each expert's pose, yielding 760 red dots in each frame. In each frame, the mean of the experts gives a single hypothesis about the 3D non-rigid deformation of the face (lower right) as well as the rigid pose of the face (rotated 3D axes, lower left). Notice G-flow's ability to
recover from error: bad initial hypotheses are weeded out, leaving only good hypotheses.
To compare G-flow's performance versus deterministic constrained optic flow algorithms such as [10, 2, 11], we used both G-flow and the method from [2] to track the same video
sequence. We ran each tracker several times, introducing small errors in the starting pose.
Figure 3: Average error over time for G-flow (green) and for deterministic optic flow [2] (blue).
Results were averaged over 16 runs (deterministic algorithm) or 4 runs (G-flow) and smoothed.
As ground truth, the 2D locations of 6 points were hand-labeled in every 20th frame. The
error at every 20th frame was calculated as the distance from these labeled locations to the
inferred (tracked) locations, averaged across several runs. Figure 3 compares this tracking
error as a function of time for the deterministic constrained optic flow algorithm and for a
20-expert version of the G-flow tracking algorithm. Notice that the deterministic system
has a tendency to drift (increase in error) over time, whereas G-flow can recover from drift.
Acknowledgments
Tim K. Marks was supported by NSF grant IIS-0223052 and NSF grant DGE-0333451 to GWC.
John Hershey was supported by the UCDIMI grant D00-10084. J. Cooper Roddey was supported by
the Swartz Foundation. Javier R. Movellan was supported by NSF grants IIS-0086107, IIS-0220141,
and IIS-0223052, and by the UCDIMI grant D00-10084.
References

[1] Simon Baker and Iain Matthews. Lucas-Kanade 20 years on: A unifying framework. International Journal of Computer Vision, 56(3):221-255, 2002.
[2] M. Brand. Flexible flow for 3D nonrigid tracking and shape recovery. In CVPR, volume 1, pages 315-322, 2001.
[3] H. Chen, P. Kumar, and J. van Schuppen. On Kalman filtering for conditionally gaussian systems with random matrices. Syst. Contr. Lett., 13:397-404, 1989.
[4] R. Chen and J. Liu. Mixture Kalman filters. J. R. Statist. Soc. B, 62:493-508, 2000.
[5] A. Doucet and C. Andrieu. Particle filtering for partially observed gaussian state space models. J. R. Statist. Soc. B, 64:827-838, 2002.
[6] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-blackwellised particle filtering for dynamic bayesian networks. In 16th Conference on Uncertainty in AI, pages 176-183, 2000.
[7] A. Doucet, S. J. Godsill, and C. Andrieu. On sequential monte carlo sampling methods for bayesian filtering. Statistics and Computing, 10:197-208, 2000.
[8] Zoubin Ghahramani and Geoffrey E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):831-864, 2000.
[9] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, 1981.
[10] L. Torresani, D. Yang, G. Alexander, and C. Bregler. Tracking and modeling non-rigid objects with rank constraints. In CVPR, pages 493-500, 2001.
[11] Lorenzo Torresani, Aaron Hertzmann, and Christoph Bregler. Learning non-rigid 3d shape from 2d motion. In Advances in Neural Information Processing Systems 16. MIT Press, 2004.
Euclidean Embedding of Co-occurrence Data

Amir Globerson (1)    Gal Chechik (2)    Fernando Pereira (3)    Naftali Tishby (1)

(1) School of Computer Science and Engineering, Interdisciplinary Center for Neural Computation, The Hebrew University Jerusalem, 91904, Israel
(2) Computer Science Department, Stanford University, Stanford, CA 94305, USA
(3) Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
Abstract
Embedding algorithms search for low dimensional structure in complex
data, but most algorithms only handle objects of a single type for which
pairwise distances are specified. This paper describes a method for embedding objects of different types, such as images and text, into a single
common Euclidean space based on their co-occurrence statistics. The
joint distributions are modeled as exponentials of Euclidean distances in
the low-dimensional embedding space, which links the problem to convex optimization over positive semidefinite matrices. The local structure of our embedding corresponds to the statistical correlations via random walks in the Euclidean space. We quantify the performance of our
method on two text datasets, and show that it consistently and significantly outperforms standard methods of statistical correspondence modeling, such as multidimensional scaling and correspondence analysis.
1  Introduction
Embeddings of objects in a low-dimensional space are an important tool in unsupervised
learning and in preprocessing data for supervised learning algorithms. They are especially
valuable for exploratory data analysis and visualization by providing easily interpretable
representations of the relationships among objects. Most current embedding techniques
build low dimensional mappings that preserve certain relationships among objects and differ in the relationships they choose to preserve, which range from pairwise distances in
multidimensional scaling (MDS) [4] to neighborhood structure in locally linear embedding
[12]. All these methods operate on objects of a single type endowed with a measure of
similarity or dissimilarity.
However, real-world data often involve objects of several very different types without a
natural measure of similarity. For example, typical web pages or scientific papers contain
varied data types such as text, diagrams, images, and equations. A measure of similarity
between words and pictures is difficult to define objectively. Defining a useful measure of
similarity is even difficult for some homogeneous data types, such as pictures or sounds,
where the physical properties (pitch and frequency in sounds, color and luminosity distribution in images) do not directly reflect the semantic properties we are interested in.
The current paper addresses this problem by creating embeddings from statistical associations. The idea is to find a Euclidean embedding in low dimension that represents the
empirical co-occurrence statistics of two variables. We focus on modeling the conditional
probability of one variable given the other, since in the data we analyze (documents and
words, authors and terms) there is a clear asymmetry which suggests a conditional model.
Joint models based on similar principles can be devised in a similar fashion, and may be
more appropriate for symmetric data. We name our method CODE for Co-Occurrence
Data Embedding.
Our cognitive notions are often built through statistical associations between different information sources. Here we assume that those associations can be represented in a lowdimensional space. For example, pictures which frequently appear with a given text are
expected to have some common, locally low-dimensional characteristic that allows them to
be mapped to adjacent points. We can thus rely on co-occurrences to embed different entity types, such as words and pictures, genes and expression arrays, into the same subspace.
Once this embedding is achieved it also naturally defines a measure of similarity between
entities of the same kind (such as images), induced by their other corresponding modality
(such as text), providing a meaningful similarity measure between images.
Embedding of heterogeneous objects is performed in statistics using correspondence analysis (CA), a variant of canonical correlation analysis for count data [8]. These are related
to Euclidean distances when the embeddings are constrained to be normalized. However, as
we show below, removing this constraint has great benefits for real data. Statistical embedding of same-type objects was recently studied by Hinton and Roweis [9]. Their approach
is similar to ours in that it assumes that distances induce probabilistic relations between
objects. However, we do not assume that distances are given in advance, but instead we
derive them from the empirical co-occurrence data. The Parametric Embedding method
[11], which also appears in the current proceedings, is formally similar to our method but
is used in the setting of supervised classification.
2  Problem Formulation
Let X and Y be two categorical variables with an empirical distribution \bar p(x, y). No additional assumptions are made on the values of X and Y or their relationships. We wish to model the statistical dependence between X and Y through an intermediate Euclidean space R^d and mappings \phi: X \to R^d and \psi: Y \to R^d. These mappings should reflect the dependence between X and Y in the sense that the distance between each \phi(x) and \psi(y) determines their co-occurrence statistics.

We focus in this manuscript on modeling the conditional distribution p(y|x)^1, and define a model which relates conditional probabilities to distances by

p(y|x) = \frac{\bar p(y)}{Z(x)}\, e^{-d^2_{x,y}}   \forall x \in X, \forall y \in Y,   (1)

where d^2_{x,y} \equiv \|\phi(x) - \psi(y)\|^2 = \sum_{k=1}^{d} (\phi_k(x) - \psi_k(y))^2 is the squared Euclidean distance between \phi(x) and \psi(y), and Z(x) is the partition function for each value of x. This partition function equals Z(x) = \sum_y \bar p(y) e^{-d^2_{x,y}} and is thus the empirical mean of the exponentiated distances from x (therefore Z(x) \le 1).
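As a concrete reading of Eq. (1), a short numpy sketch that computes the model conditionals from given embeddings (array names and shapes are ours):

```python
import numpy as np

def code_conditionals(phi, psi, p_y):
    """Eq. (1): phi is |X| x d, psi is |Y| x d, p_y is the empirical marginal
    of Y. Returns the |X| x |Y| matrix of model conditionals p(y|x)."""
    d2 = ((phi[:, None, :] - psi[None, :, :]) ** 2).sum(-1)  # squared distances
    unnorm = p_y[None, :] * np.exp(-d2)
    return unnorm / unnorm.sum(axis=1, keepdims=True)        # divide by Z(x)
```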
This model directly relates the ratio p(y|x)/\bar p(y) to the distance between the embedded x and y. The ratio decays exponentially with the distance; thus for any x, a closer y will have a higher interaction ratio. As a result of the fast decay, the closest objects dominate the distribution.

^1 We have studied several other models of the joint rather than the conditional distribution. These differ by the way the marginals are modeled and will be described elsewhere.

Figure 1: Embedding of X, Y into the same d-dimensional space.

The model of Eq. 1 can also be described as the result of a random walk
in the low-dimensional space illustrated in Figure 1. When y has a uniform marginal, the
probability p(y|x) corresponds to a random walk from x to y, with transition probability
inversely related to distance.
We now turn to the task of learning \phi, \psi from an empirical distribution \bar p(x, y). It is natural in this case to maximize the likelihood (up to constants depending on \bar p(y))

\max_{\phi,\psi} l(\phi, \psi) = -\sum_{x,y} \bar p(x,y)\, d^2_{x,y} - \sum_x \bar p(x) \log Z(x),   (2)
where \bar p(x, y) denotes the empirical distribution over X, Y. As in other cases, maximizing the likelihood is also equivalent to minimizing the KL divergence D_{KL} between the empirical and the model's distributions. The likelihood is composed of two terms. The first is (minus) the mean distance between x and y. This will be maximized when all distances are zero. This trivial solution is avoided because of the regularization term \sum_x \bar p(x) \log Z(x), which acts to increase distances between x and y points. The next section discusses the relation of this target function to that of Canonical Correlation Analysis [10].
To characterize the maxima of the likelihood we differentiate it with respect to the embeddings of individual objects \phi(x), \psi(y), and obtain the following gradients

\frac{\partial l(\phi,\psi)}{\partial \phi(x)} = 2\bar p(x) \left[ \langle\psi(y)\rangle_{\bar p(y|x)} - \langle\psi(y)\rangle_{p(y|x)} \right]

\frac{\partial l(\phi,\psi)}{\partial \psi(y)} = 2 p(y) \left[ \psi(y) - \langle\phi(x)\rangle_{p(x|y)} \right] - 2\bar p(y) \left[ \psi(y) - \langle\phi(x)\rangle_{\bar p(x|y)} \right],   (3)

where p(y) = \sum_x p(y|x)\, \bar p(x).

Equating these gradients to zero, the \phi(x) gradient yields \langle\psi(y)\rangle_{p(y|x)} = \langle\psi(y)\rangle_{\bar p(y|x)}. This characterization is similar to the one seen in maximum entropy learning. Since p(y|x) will have significant values for Y values such that \psi(y) is close to \phi(x), this condition implies that the expected location of a neighbor of \phi(x) is the same under the empirical and model distributions.
To find the optimal \phi, \psi for a given embedding dimension d, we used a conjugate gradient ascent algorithm with random restarts. In section 4 we describe a different approach to this optimization problem.
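A plain gradient-ascent step is also easy to write directly from (2) and (3); this sketch (step size and names ours) updates both embeddings at once:

```python
import numpy as np

def code_gradient_step(phi, psi, p_xy, lr=0.1):
    """One ascent step on the likelihood (2) using the gradients (3).
    phi: |X| x d, psi: |Y| x d, p_xy: empirical joint distribution."""
    p_x, p_y = p_xy.sum(1), p_xy.sum(0)
    d2 = ((phi[:, None, :] - psi[None, :, :]) ** 2).sum(-1)
    p_cond = p_y[None, :] * np.exp(-d2)
    p_cond /= p_cond.sum(1, keepdims=True)               # model p(y|x)
    emp_cond = p_xy / np.maximum(p_x[:, None], 1e-12)    # empirical p(y|x)
    # dl/dphi(x) = 2 p(x) [ <psi>_empirical - <psi>_model ]
    g_phi = 2 * p_x[:, None] * ((emp_cond - p_cond) @ psi)
    # dl/dpsi(y), Eq. (3), with p(y) = sum_x p(y|x) p(x)
    joint_model = p_cond * p_x[:, None]
    py_model = joint_model.sum(0)
    g_psi = (2 * py_model[:, None] * psi - 2 * joint_model.T @ phi
             - 2 * p_y[:, None] * psi + 2 * p_xy.T @ phi)
    return phi + lr * g_phi, psi + lr * g_psi
```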
3  Relation to Other Methods
Embedding the rows and columns of a contingency table into a low dimensional Euclidean space is related to statistical methods for the analysis of heterogeneous data. Fisher [6] described a method for mapping X and Y into \phi(x), \psi(y) such that the correlation coefficient between \phi(x), \psi(y) is maximized. His method is in fact the discrete analogue of the more widely known Canonical Correlation Analysis (CCA) [10]. Another closely related method is Correspondence Analysis [8], which uses a different normalization scheme, and aims to model \chi^2 distances between rows and columns of \bar p(x, y).
The goal of all the above methods is to maximize the correlation coefficient between the embeddings of X and Y. We now discuss their relation to our distance based method. First, note that the correlation coefficient is invariant under affine transformations, and we can thus focus on centered solutions with a unity covariance matrix (\langle\phi(x)\rangle = 0 and COV(\phi(x)) = COV(\psi(y)) = I). In this case, the correlation coefficient is given by the following expression (we focus on d = 1 for simplicity)

\rho(\phi(x), \psi(y)) = \sum_{x,y} \bar p(x,y)\, \phi(x)\psi(y) = -\frac{1}{2} \sum_{x,y} \bar p(x,y)\, d^2_{x,y} + 1.   (4)
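The identity (4) is easy to verify numerically for one-dimensional embeddings whitened with respect to the marginals (random data; the construction is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((5, 7)); p /= p.sum()               # empirical p(x, y)
px, py = p.sum(1), p.sum(0)
phi, psi = rng.standard_normal(5), rng.standard_normal(7)
# center and scale to zero mean, unit variance under the marginals
phi = (phi - px @ phi) / np.sqrt(px @ (phi - px @ phi) ** 2)
psi = (psi - py @ psi) / np.sqrt(py @ (psi - py @ psi) ** 2)
d2 = (phi[:, None] - psi[None, :]) ** 2
rho = (p * np.outer(phi, psi)).sum()
assert np.isclose(rho, 1 - 0.5 * (p * d2).sum())   # Eq. (4)
```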
Maximizing the correlation is therefore equivalent to minimizing the mean distance across
all pairs. This clarifies the relation between CCA and our method: Both methods aim
to minimize the average distance between X and Y embeddings. However, CCA forces
both embeddings to be centered and with a unity covariance matrix, whereas our method
introduces a global regularization term related to the partition function.
Our method is additionally related to exponential models of contingency tables, where the
counts are approximated by a normalized exponent of a low rank matrix [7]. The current
approach can be understood as a constrained version of these models where the expression
in the exponent is constrained to have a geometric interpretation.
A well-known geometrically oriented embedding method is multidimensional scaling
(MDS) [4], whose standard version applies to same-type objects with predefined distances.
MDS embedding of heterogeneous entities was studied in the context of modeling ranking
data (see [4] section 7.3). These models, however, focus on specific properties of ordinal data and therefore result in optimization principles and algorithms different from our
probabilistic interpretation.
4  Semidefinite Representation
The optimal embeddings \phi, \psi may be found using unconstrained optimization techniques. However, the Euclidean distances used in the embedding space also allow us to reformulate the problem as constrained convex optimization over the cone of positive semidefinite (PSD) matrices [14].
We start by showing that for embeddings with dimension d = |X| + |Y|, maximizing (2) is equivalent to minimizing a certain convex non-linear function over PSD matrices. Consider the matrix A whose columns are all the embedded vectors \phi and \psi. The matrix G \equiv A^T A is the Gram matrix of the dot products between embedding vectors. It is thus a symmetric PSD matrix of rank \le d. The converse is also true: any PSD matrix of rank \le d can be factorized as A^T A, where A is an embedding matrix of dimension d. The distance between two columns in A is linearly related to the Gram matrix via d^2_{ij} = g_{ii} + g_{jj} - 2g_{ij}.
Since the likelihood function depends only on the distances between points in X and in Y, we can write the optimization problem in (2) as

\min_G \sum_x \bar p(x) \log \sum_y \bar p(y)\, e^{-d^2_{xy}} + \sum_{x,y} \bar p(x,y)\, d^2_{xy}   (5)

subject to  G \succeq 0,   rank(G) \le d,   d^2_{xy} = g_{xx} + g_{yy} - 2g_{xy},

where g_{xy} denotes the element in G corresponding to specific values of x, y.
Thus, our problem is equivalent to optimizing a nonlinear objective over the set of PSD
matrices of a constrained rank. The minimized function is convex, since it is the sum of a linear function of G and functions \log\sum\exp of an affine expression in G, which are also
convex (see Geometric Programming section in [2]). Moreover, when G has full rank, the
set of constraints is also convex. We conclude that when the embedding dimension is of
size d = |X| + |Y | the optimization problem of Eq. (5) is convex. Thus there are no local
minima, and solutions can be found efficiently.
The PSD formulation allows us to add non-trivial constraints. Consider, for example, constraining the p(y) marginal to its empirical values, i.e. \sum_x p(y|x)\bar p(x) = \bar p(y). To introduce this as a convex constraint we take two steps. First, we note that we can relax the constraint that distributions normalize to one, and require that they normalize to less than one. This is achieved by replacing \log Z(x) with a free variable a(x) and writing the problem as follows (we omit the dependence of d^2_{xy} on G for brevity)
\min_G \sum_x \bar p(x)\, a(x) + \sum_{x,y} \bar p(x,y)\, d^2_{xy}   (6)

subject to  G \succeq 0,   rank(G) \le d,   \log \sum_y \bar p(y)\, e^{-d^2_{xy} - a(x)} \le 0   \forall x.
It can be shown that the optimum of (6) will be obtained for solutions normalized to one, and it thus coincides with the optimum of (5). The constraint \sum_x p(y|x)\bar p(x) = \bar p(y) can now be relaxed to the inequality \sum_x \bar p(y)\bar p(x) e^{-d^2_{xy} - a(x)} \le \bar p(y), which defines a convex set. Again, the optimum will be obtained when the constraint is satisfied with equality.
Embedding into a low dimension requires constraining the rank, but this is difficult since
the problem is no longer convex in the general case. One approach to obtaining low rank
solutions is to optimize over a full rank G and then project it into a lower dimension via
spectral decomposition as in [14] or classical MDS. However, in the current problem, this
was found to be ineffective. Instead, we penalize high-rank solutions by adding the trace
of G [5], weighted by a positive factor, to the objective function in (5). Small values
of Tr(G) are expected to correspond to sparse eigenvalue sets and thus penalize high rank
solutions. This approach was tested on subsets of the databases described in section 5
and yielded similar results to those of the gradient based algorithm. We believe that PSD
algorithms may turn out to be more efficient in cases where relatively high dimensional
embeddings are sought. Furthermore, under the PSD formulation it is easy to introduce
additional constraints, for example on distances between subsets of points (as in [14]), and
on marginals of the distribution.
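A sketch of the resulting convex program, combining the full-rank version of (5) with the trace penalty, written with cvxpy; the penalty weight and the final spectral projection step are our choices (and we assume every y has positive empirical mass):

```python
import numpy as np
import cvxpy as cp

def code_sdp(p_xy, penalty=0.05, dim=2):
    """Solve the convex relaxation of (5) over the Gram matrix G (PSD),
    with trace(G) penalized to encourage low-rank solutions, then embed
    by spectral decomposition."""
    nx, ny = p_xy.shape
    p_x, p_y = p_xy.sum(1), p_xy.sum(0)
    G = cp.Variable((nx + ny, nx + ny), PSD=True)
    d = cp.diag(G)
    # d2[x, y] = g_xx + g_yy - 2 g_xy is affine in G
    D2 = (cp.reshape(d[:nx], (nx, 1)) @ np.ones((1, ny))
          + np.ones((nx, 1)) @ cp.reshape(d[nx:], (1, ny))
          - 2 * G[:nx, nx:])
    obj = cp.sum(cp.multiply(p_xy, D2)) + penalty * cp.trace(G)
    for x in range(nx):
        # p(x) log sum_y p(y) exp(-d2_xy): a log-sum-exp of an affine expression
        obj += p_x[x] * cp.log_sum_exp(np.log(p_y) - D2[x, :])
    cp.Problem(cp.Minimize(obj)).solve()
    w, V = np.linalg.eigh(G.value)
    A = V[:, ::-1][:, :dim] * np.sqrt(np.maximum(w[::-1][:dim], 0.0))
    return A[:nx], A[nx:]    # embeddings of X and of Y
```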
5  Applications
We tested our approach on a variety of applications. Here we present embedding of words
and documents and authors and documents. To provide quantitative assessment of the
performance of our method, that goes beyond visual inspection, we apply it to problems
where some underlying structures are known in advance. The known structures are only
used for performance measurement and not during learning.
Figure 2: CODE Embedding of 2483 documents and 2000 words from the NIPS database (the 2000
most frequent words, excluding the first 100, were used). The left panel shows document embeddings
for NIPS 15-17, with colors to indicate the document topic. Other panels show embedded words
and documents for the areas specified by rectangles. Figure (b) shows the border region between
algorithms and architecture (AA) and learning theory (LT) (bottom rectangle in (a)). Figure (c)
shows the border region between neuroscience (NS) and biological vision (VB) (upper rectangle in
(a)). Figure (d) shows mainly control and navigation (CN) documents (left rectangle in (a)).
Figure 3: CODE Embedding of 2000 words and 250 authors from the NIPS database (the 250
authors with highest word counts were chosen; words were selected as in Figure 2). Left panel shows
embeddings for authors (red crosses) and words (blue dots). Other panels show embedded authors
(only first 100 shown) and words for the areas specified by rectangles. They can be seen to correspond
to learning theory, control and neuroscience (from left to right).
5.1  NIPS Database
Embedding algorithms may be used to study the structure of document databases. Here we
used the NIPS 0-12 database supplied by Roweis^2, and augmented it with data from NIPS volumes 13-17^3. The last three volumes also contain an indicator of the document's topic
(AA for algorithms and architecture, LT for learning theory, NS for neuroscience etc.).
We first used CODE to embed documents and words into R^2. The results are shown in
Figure 2. It can be seen that documents with similar topics are mapped next to each other
(e.g. AA near LT and NS near Biological Vision). Furthermore, words characterize the
topics of their neighboring documents.
Next, we used the data to generate an authors-words matrix (as in the Roweis database). We
could now embed authors and words into R^2, by using CODE to model p(word|author).
The results are shown in Figure 3. It can be seen that authors are indeed mapped next to
terms relevant to their work, and that authors dealing with similar domains are also mapped
together. This illustrates how co-occurrence of words and authors may be used to induce a
metric on authors alone.
^2 See http://www.cs.toronto.edu/~roweis/data.html
^3 Data available at http://robotics.stanford.edu/~gal/
Figure 4: (a) Document purity measure for the embedding of newsgroups crypt, electronics and med, as a function of neighborhood size. (b) The doc-doc measure averaged over 7 newsgroup sets. For each set, the maximum performance was normalized to one. Embedding dimension is 2. Sets are atheism, graphics, crypt; ms-windows, graphics; ibm.pc.hw, ms-windows; crypt, electronics; crypt, electronics, med; crypt, electronics, med, space; politics.mideast, politics.misc. (c) The word-doc measure for CODE and CA algorithms, for 7 newsgroup sets. Embedding dimension is 2.
5.2  Information Retrieval
To obtain a more quantitative estimate of performance, we applied CODE to the 20 newsgroups corpus, preprocessed as described in [3]. This corpus consists of 20 groups, each
with 1000 documents. We first removed the 100 most frequent words, and then selected the
next k most frequent words for different values of k (see below). The resulting words and
documents were embedded with CODE, Correspondence Analysis (CA), SVD, IsoMap
and classical MDS^4. CODE was used to model the distribution of words given documents
p(w|d). All methods were tested under several normalization schemes, including document
sum normalization and TFIDF. Results were consistent over all normalization schemes.
An embedding of words and documents is expected to map documents with similar semantics together, and to map words close to documents which are related to the meaning of the
word. We next test how our embeddings perform with respect to these requirements. To
represent the meaning of a document we use its corresponding newsgroup. Note that this
information is used only for evaluation and not in constructing the embedding itself.
To measure how well similar documents are mapped together we define a purity measure, which we denote doc-doc. For each embedded document, we measure the fraction of its neighbors that are from the same newsgroup. This is repeated for all neighborhood sizes, and averaged over all sizes and documents.
To measure how documents are related to their neighboring words, we use a measure denoted by word-doc. For each document d we look at its n nearest words and calculate their probability under the document's newsgroup, normalized by their prior. This is repeated for neighborhood sizes smaller than 100 and averaged over documents. The word-doc measure was only compared with CA, since this is the only method that provides joint embeddings.
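A sketch of the doc-doc computation (the word-doc measure is analogous, scoring each document's nearest words under the document's newsgroup); the distance and averaging details are ours:

```python
import numpy as np

def doc_doc_purity(doc_emb, labels):
    """Average, over all documents and all neighborhood sizes k, of the
    fraction of a document's k nearest embedded documents that share its
    newsgroup label."""
    n = len(doc_emb)
    d2 = ((doc_emb[:, None, :] - doc_emb[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # a document is not its own neighbor
    order = np.argsort(d2, axis=1)[:, :-1]       # neighbors, nearest first
    same = (labels[order] == labels[:, None]).astype(float)
    purity_at_k = same.cumsum(axis=1) / np.arange(1, n)
    return purity_at_k.mean()                    # average over k and over documents
```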
Figure 4 compares the performance of CODE with that of the other methods with respect
to the doc-doc and word-doc measures. CODE can be seen to outperform all other
methods on both measures.
^4 CA embedding followed the standard procedure described in [8]. IsoMap implementation was provided by the IsoMap authors [13]. We tested both an SVD over the count matrix and an SVD over the log of the count plus one; only the latter is described here because it was better than the former. For MDS, the distances between objects were calculated as the dot product between their count vectors (we also tested Euclidean distances).
6  Discussion
We presented a method for embedding objects of different types into the same low-dimensional Euclidean space. This embedding can be used to reveal low-dimensional structures
when distance measures between objects are unknown. Furthermore, the embedding induces a meaningful metric also between objects of the same type, which could be used, for
example, to embed images based on accompanying text, and derive the semantic distance
between images.
Co-occurrence embedding should not be restricted to pairs of variables, but can be extended
to multivariate joint distributions, when these are available. It can also be augmented to use
distances between same-type objects when these are known.
An important question in embedding objects is whether the embedding is unique. In other
words, can there be two non isometric embeddings which are obtained at the optimum
of the problem. This question is related to the rigidity and uniqueness of embeddings on
graphs, specifically complete bipartite graphs in our case. A theorem of Bolker and Roth
[1] asserts that for such graphs with at least 5 vertices on each side, embeddings are rigid,
i.e. they cannot be continuously transformed. This suggests that the CODE embeddings
for |X|, |Y | ? 5 are unique (at least locally) for d ? 3.
We focused here on geometric models for conditional distributions. While in some cases such a modeling choice is more natural, in others joint models may be more appropriate. In this context it will be interesting to consider models of the form p(x, y) \propto p(x)p(y)e^{-d^2_{x,y}}, where p(x), p(y) are the marginals of p(x, y). Maximum likelihood in these models is a non-trivial constrained optimization problem, and may be approached using the semidefinite representation outlined here.
References

[1] E.D. Bolker and B. Roth. When is a bipartite graph a rigid framework? Pacific J. Math., 90:27-44, 1980.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2004.
[3] G. Chechik and N. Tishby. Extracting relevant structures with side information. In S. Becker, S. Thrun, and K. Obermayer, editors, NIPS 15, 2002.
[4] T. Cox and M. Cox. Multidimensional Scaling. Chapman and Hall, London, 1984.
[5] M. Fazel, H. Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proc. of the American Control Conference, 2001.
[6] R.A. Fisher. The precision of discriminant functions. Ann. Eugen. Lond., 10:422-429, 1940.
[7] A. Globerson and N. Tishby. Sufficient dimensionality reduction. Journal of Machine Learning Research, 3:1307-1331, 2003.
[8] M.J. Greenacre. Theory and applications of correspondence analysis. Academic Press, 1984.
[9] G. Hinton and S.T. Roweis. Stochastic neighbor embedding. In NIPS 15, 2002.
[10] H. Hotelling. The most predictable criterion. Journal of Educational Psych., 26:139-142, 1935.
[11] T. Iwata, K. Saito, N. Ueda, S. Stromsten, T. Griffiths, and J. Tenenbaum. Parametric embedding for class visualization. In NIPS 18, 2004.
[12] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[13] J.B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, 2000.
[14] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. In CVPR, 2004.
Exploration-Exploitation Tradeoffs for
Experts Algorithms in Reactive Environments
Daniela Pucci de Farias
Department of Mechanical Engineering
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Nimrod Megiddo
IBM Almaden Research Center
650 Harry Road, K53-B2
San Jose, CA 95120
[email protected]
Abstract
A reactive environment is one that responds to the actions of an agent rather than
evolving obliviously. In reactive environments, experts algorithms must balance
exploration and exploitation of experts more carefully than in oblivious ones. In
addition, a more subtle definition of a learnable value of an expert is required. A
general exploration-exploitation experts method is presented along with a proper
definition of value. The method is shown to asymptotically perform as well as
the best available expert. Several variants are analyzed from the viewpoint of the
exploration-exploitation tradeoff, including explore-then-exploit, polynomially
vanishing exploration, constant-frequency exploration, and constant-size exploration phases. Complexity and performance bounds are proven.
1  Introduction
Real-world environments require agents to choose actions sequentially. For example, a driver has to choose a route from one point to another every day, based on past experience and perhaps some current information. In another example, an airline company has to set prices dynamically, also based on past experience and current information. One important difference between these two examples is that the effect of the driver's decision on the future traffic patterns is negligible, whereas prices set by one airline can affect future market prices significantly. In this sense the decisions of the airlines are made in a reactive environment, whereas the driver performs in a non-reactive one. For this reason, the driver's problem is essentially a problem of prediction while the airline's problem has an additional
element of control.
In the decision problems we consider, an agent has to repeatedly choose currently feasible actions. The agent then observes a reward, which depends both on the chosen action
and the current state of the environment. The state of the environment may depend both
on the agent's past choices and on choices made by the environment independent of the agent's current choice. There are various known approaches to sequential decision making
under uncertainty. In this paper we focus on the so-called experts algorithm approach. An
"expert" (or "oracle") is simply a particular strategy recommending actions based on the past history of the process. An experts algorithm is a method that combines the recommendations of several given "experts" (or "oracles") into another strategy of choosing actions
(e.g., [4, 1, 3]).
Many learning algorithms can be interpreted as "exploration-exploitation" methods.
Roughly speaking, such algorithms blend choices of exploration, aimed at acquiring knowledge, and exploitation that capitalizes on gained knowledge to accumulate rewards. In particular, some experts algorithms can be interpreted as blending the testing of all experts
and following those experts that were observed to be more rewarding. Our previous paper [2]
presented a specific exploration-exploitation experts algorithm. The reader is referred to
[2] for more definitions, examples and discussion. That algorithm was designed especially
for learning in reactive environments. The difference between our algorithm and previous
experts algorithms is that our algorithm tests each expert for multiple consecutive stages of
the decision process, in order to acquire knowledge about how the environment reacts to
the expert. We pointed out that the "Minimum Regret" criterion often used for evaluating
experts algorithms was not suitable for reactive environments, since it ignored the possibility that different experts may induce different states of the environment. The previous
paper, however, did not attempt to optimize the exploration-exploitation tradeoff. It rather
focused on one particular possibility, which was shown to perform in the long-run as well
as the best expert.
In this paper, we present a more general exploration-exploitation experts method and provide results about the convergence of several of its variants. We develop performance guarantees showing that the method achieves average payoff comparable to that achieved by the
best expert. We characterize convergence rates that hold both in expected value and with
high probability. We also introduce a definition for the long-term value of an expert, which
captures the reactions of the environment to the expert?s actions, as well as the fact that any
learning algorithm commits mistakes. Finally, we characterize how fast the method learns
the value of each expert. An important aspect of our results is that they provide an explicit
characterization of the tradeoff between exploration and exploitation.
The paper is organized as follows. The method is described in section 2. Convergence
rates based on actual expert performance are presented in section 3. In section 4, we define the experts' long-run values, whereas in section 5 we address the question of how
fast the method learns the values of the experts. Finally, in section 6 we analyze various
exploration schemes. These results assume that the number of stages in each phase grows with the number of phases in which the chosen expert has been followed (n = N_e); section 6.4 considers phases of fixed length.
2  The Exploration-Exploitation Method
The problem we consider in this paper can be described as follows. At times t = 1, 2, ..., an agent has to choose actions a_t \in A. At the same times the environment also "chooses" b_t \in B, and then the agent receives a reward R(a_t, b_t). The choices of the environment
may depend on various factors, including the past choices of the agent.
As in the particular algorithm of [2], the general method follows chosen experts for multiple
stages rather than picking a different expert each time. A maximal set of consecutive stages
during which the same expert is followed is called a phase. Phase numbers are denoted by
i. The number of phases during which expert e has been followed is denoted by N_e, the total number of stages during which expert e has been followed is denoted by S_e, and the average payoff from phases in which expert e has been followed is denoted by M_e. The
general method is stated as follows.
- Exploration. An exploration phase consists of picking a random expert e (i.e., from the uniform distribution over {1, ..., r}), and following e's recommendations for a certain number of stages depending on the variant of the method.
- Exploitation. An exploitation phase consists of picking an expert e with maximum M_e, breaking ties at random, and following e's recommendations for a certain number of stages depending on the variant of the method.
A general Exploration-Exploitation Experts Method (a sketch in code follows the listing):

1. Initialize M_e = N_e = S_e = 0 (e = 1, ..., r) and i = 1.
2. With probability p_i, perform an exploration phase, and with probability 1 - p_i an exploitation phase; denote by e the expert chosen to be followed and by n the number of stages chosen for the current phase.
3. Follow expert e's instructions for the next n stages. Increment N_e = N_e + 1 and update S_e = S_e + n. Denote by \bar R the average payoff accumulated during the current phase of n stages and update

   M_e = M_e + (n / S_e)(\bar R - M_e).

4. Increment i = i + 1 and go to step 2.
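A runnable sketch of the method with the n = N_e phase-length variant; run_stage(e) is a hypothetical callback that follows expert e for one stage in the environment and returns the observed reward:

```python
import numpy as np

def eee(num_experts, run_stage, p, num_phases, rng=None):
    """Exploration-Exploitation Experts method. p(i) is the exploration
    probability for phase i; the phase length is taken to be the updated N_e."""
    rng = rng or np.random.default_rng()
    M = np.zeros(num_experts)        # average payoff per expert
    N = np.zeros(num_experts, int)   # phases per expert
    S = np.zeros(num_experts, int)   # stages per expert
    for i in range(1, num_phases + 1):
        if rng.random() < p(i):                           # exploration phase
            e = rng.integers(num_experts)
        else:                                             # exploitation phase
            e = rng.choice(np.flatnonzero(M == M.max()))  # break ties at random
        N[e] += 1
        n = N[e]
        R_bar = np.mean([run_stage(e) for _ in range(n)])
        S[e] += n
        M[e] += (n / S[e]) * (R_bar - M[e])
    return M, N, S
```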
We denote stage numbers by s and phase numbers by i. We denote by M_1(i), ..., M_r(i) the values of the registers M_1, ..., M_r, respectively, at the end of phase i. Similarly, we denote by N_1(i), ..., N_r(i) the values of the registers N_1, ..., N_r, respectively, and by S_1(i), ..., S_r(i) the values of the registers S_1, ..., S_r, respectively, at the end of phase i.
In sections 3 and 5, we present performance bounds for the EEE method when the length of the phase is n = N_e. In section 6.4 we consider the case where n = L for a fixed L.
Due to space limitations, proofs are omitted and can be found in the online appendix CITE.
3  Bounds Based on Actual Expert Performance
The original variant of the EEE method [2] used p_i = 1/i and n = N_e. The following was proven:

\Pr\left( \liminf_{s\to\infty} M(s) \ge \max_e \liminf_{i\to\infty} M_e(i) \right) = 1.   (1)
In words, the algorithm achieves asymptotically an average reward that is as large as that
of the best expert. In this section we generalize this result. We present several bounds
characterizing the relationship between M (i) and Me (i). These bounds are valuable in
several ways. First, they provide worst-case guarantees about the performance of the EEE
method. Second, they provide a starting point for analyzing the behavior of the method
under various assumptions about the environment. Third, they quantify the relationship
between amount of exploration, represented by the exploration probabilities p i , and the
loss of performance. Together with the analysis of Section 5, which characterizes how fast
the EEE method learns the value of each expert, the bounds derived here describe explicitly
the tradeoff between exploration and exploitation.
We denote by Z_{ej} the event "phase j performs exploration with expert e," and let Z_j = \sum_e Z_{ej} and

\bar Z_{i_0 i} = E\left[ \sum_{j=i_0+1}^{i} Z_j \right] = \sum_{j=i_0+1}^{i} p_j.

Note that \bar Z_{i_0 i} denotes the expected number of exploration phases between phases i_0 + 1 and i.
The first theorem establishes that, with high probability, after a finite number of iterations,
the EEE method performs comparably to the best expert. The performance of each expert
is defined as the smallest average reward achieved by that expert in the interval between an
(arbitrary) phase i0 and the current phase i. It can be shown via a counterexample that this
bound cannot be extended into a (somewhat more natural) comparison between the average
reward of the EEE method and the average reward of each expert at iteration i.
Theorem 3.1. For all i_0, i and \epsilon such that \bar Z_{i_0 i} \le i\epsilon^2/(4\sqrt{r}\,u^2) - i_0\epsilon/(4u),

\Pr\left( M(i) \le \max_e \min_{i_0+1 \le j \le i} M_e(j) - 2\epsilon \right) \le \exp\left\{ -\frac{1}{2i} \left( \frac{i\epsilon^2}{4\sqrt{r}\,u^2} - \frac{i_0\epsilon}{4u} - \bar Z_{i_0 i} \right)^2 \right\}.
The following theorem characterizes the expected difference between the average reward of the EEE method and that of the best expert.

Theorem 3.2. For all i_0 \le i and \epsilon > 0,

E\left[ M(i) - \max_e \min_{i_0+1 \le j \le i} M_e(j) \right] \ge -\epsilon - u\,\frac{i_0(i_0+1)}{i(i/r+1)} - \frac{3u+2\epsilon}{i}\,\bar Z_{i_0 i} - 2u\, e^{-i\epsilon^2/(4\sqrt{r}\,u^2)}.
It follows from Theorem 3.1 that, under certain assumptions on the exploration probabilities, the EEE method performs asymptotically at least as well as the expert that did best.
Corollary 3.1 generalizes the asymptotic result established in [2].
Corollary 3.1. If \lim_{i\to\infty} \bar Z_{0i}/i = 0, then

\Pr\left( \liminf_{s\to\infty} M(s) \ge \max_e \liminf_{i\to\infty} M_e(i) \right) = 1.   (2)
Note that here the average reward obtained by the EEE method is compared with the reward
actually achieved by each expert during the same run of the method. It does not have any
implication on the behavior of Me (i), which is analyzed in the next section.
4  The Value of an Expert
In this section we analyze the behavior of the average reward Me (i) that is computed by
the EEE method for each expert e. This average reward is also used by the method to intuitively estimate the value of expert e. So, the question is whether the EEE method is indeed
capable of learning the value of the best experts. Thus, we first discuss what is a "learnable value" of an expert. This concept is not trivial, especially when the environment is reactive.
The obvious definition of a value as the expected average reward the expert could achieve,
if followed exclusively, does not work. The previous paper presented an example (see Section 4 in [2]) of a repeated Matching Pennies game, which proved this impossibility. That
example shows that an algorithm that attempts to learn what an expert would achieve, if
played exclusively, cannot avoid committing fatal "mistakes." In certain environments, every non-trivial learning algorithm must commit such fatal mistakes. Hence, such mistakes
cannot, in general, be considered necessarily a weakness of the algorithm. A more realistic
concept of value, relative to a certain environment policy \sigma, is defined as follows, using a real parameter \tau.
Definition 4.1.

(i) Achievable \tau-Value. A real \mu is called an achievable \tau-value for expert e against an environment policy \sigma, if there exists a constant c_\mu \ge 0 such that, for every stage s_0, every possible history h_{s_0} at stage s_0 and any number of stages s,

E\left[ \frac{1}{s} \sum_{s'=s_0+1}^{s_0+s} R(a_e(s'), b(s')) \,:\, a_e(s') \sim e(h_{s'}),\; b(s') \sim \sigma(h_{s'}) \right] \ge \mu - \frac{c_\mu}{s^\tau}.

(ii) \tau-Value. The \tau-value \mu_e^\sigma of expert e with respect to \sigma is the largest achievable \tau-value of e:

\mu_e^\sigma = \sup\{ \mu : \mu \text{ is an achievable } \tau\text{-value} \}.   (3)
In words, a value \mu is achievable by expert e if the expert can secure an expected average reward during the s stages, between stage s_0 and stage s_0 + s, which is asymptotically at least as much as \mu, regardless of the history of the play prior to stage s_0. In [2], we introduced the notion of flexibility as a way of reasoning about the value of an expert and when it can be learned. The \tau-value can be viewed as a relaxation of the previous assumptions
and hence the results here strengthen those of [2]. We note, however, that flexibility does
hold when the environment reacts with bounded memory or as a finite automaton.
5  Bounds Based on Expected Expert Performance
In this section we characterize how fast the EEE method learns the \tau-value of each expert. We can derive the rate at which the average reward achieved by the EEE method approaches the \tau-value of the best expert.
Theorem 5.1. Denote \bar\tau = \min(\tau, 1). For all \epsilon > 0 and i, if

\frac{4r}{3} \left( \frac{4c_\mu}{\epsilon(2-\bar\tau)} \right)^{1/\bar\tau} \le \bar Z_{0i},

then

\Pr\left( \inf_{j \le i} M_e(j) < \mu_e^\sigma - \epsilon \right) \le \frac{33u^2}{\epsilon^2} \exp\left\{ -\frac{\epsilon^2 \bar Z_{0i}}{43u^2\sqrt{r}} \right\}.
Note from the definition of \tau-values that we can only expect the average reward of expert e to be close to \mu_e^\sigma if the phase lengths when the expert is chosen are sufficiently large. This is necessary to ensure that the bias term c_\mu/s^\tau, present in the definition of the \tau-value, is small. The condition on \bar Z_{0i} reflects this observation. It ensures that each expert is chosen in sufficiently many phases; since phase lengths grow proportionally to the number of phases an expert is chosen, this implies that phase lengths are large enough.
We can combine Theorems 3.1 and 5.1 to provide an overall bound on the difference of the
average reward achieved by the EEE method and the ? -value of the best expert.
Corollary 5.1. For all ε > 0, i_0 and i, if
\[ \text{(i)}\;\; \left( \frac{4 c_\rho}{3 \varepsilon \hat\tau (2-\hat\tau)} \right)^{1/\hat\tau} \cdot \frac{4r}{\varepsilon^2} \le \bar Z_{0 i_0} \,, \qquad \text{and (ii)}\;\; \bar Z_{i_0 i} \le \frac{i \varepsilon^2}{4 r u^2} - \frac{i_0 \varepsilon}{4u} \,, \]
then
\[ \Pr\left( M(i) \le \max_e \rho^\pi_e - 3\varepsilon \right) \le \frac{3^3 u^2}{\varepsilon^2} \exp\left( -\frac{\varepsilon^2 \bar Z_{0 i_0}}{4^3 u^2 r} \right) + 2 \exp\left( -\frac{1}{2i} \left( \frac{i \varepsilon^2}{4 r u^2} - \frac{i_0 \varepsilon}{4u} - \bar Z_{i_0 i} \right)^2 \right) . \tag{4} \]
Corollary 5.1 explicitly quantifies the tradeoff between exploration and exploitation. In
particular, one would like to choose p_j such that Z̄_{0 i_0} is large enough to make the first
term in the bound small, and Z̄_{i_0 i} as small as possible. In Section 6, we analyze several
exploration schemes and their effect on the convergence rate of the EEE method.
Here we can also derive from Theorems 3.1 and 5.1 asymptotic guarantees for the EEE
method.
Corollary 5.2. If lim_{i→∞} Z̄_{0i} = ∞, then Pr( liminf_{i→∞} M_e(i) ≥ ρ^π_e ) = 1.
The following is an immediate result from Corollaries 3.1 and 5.2:
Corollary 5.3. If lim_{i→∞} Z̄_{0i} = ∞ and lim_{i→∞} Z̄_{0i}/i = 0, then
\[ \Pr\left( \liminf_{i\to\infty} M(i) \ge \max_e \rho^\pi_e \right) = 1 \,. \]
6 Exploration Schemes
The results of the previous sections hold under generic choices of the probabilities p_i.
Here, we discuss how various particular choices affect the speed of exploiting accumulated
information, gathering new information and adapting to changes in the environment.
6.1 Explore-then-Exploit
One approach to determining exploration schemes is to minimize the upper bound provided
in Corollary 5.1. This gives rise to a scheme where the whole exploration takes place before
any exploitation. Indeed, according to expression (4), for any fixed number of iterations i,
it is optimal to let Z̄_{0 i_0} = i_0 (i.e., p_j = 1 for all j ≤ i_0) and Z̄_{i_0 i} = 0 (i.e., p_j = 0 for
all j > i_0). Let U denote the upper bound given by (4). It can be shown that the smallest
number of phases i such that U ≤ δ is bounded between two polynomials in 1/ε, u, and
r. Moreover, its dependence on the total number of experts r is asymptotically O(r^{1.5}).
The main drawback of explore-then-exploit is its inability to adapt to changes in the policy
of the environment π: since the whole exploration occurs first, any change that occurs after
exploration has ended cannot be learned. Moreover, the choice of the last exploration phase
i_0 depends on parameters of the problem that may not be observable. Finally, it requires
fixing ε and δ a priori, and can only achieve optimality within these tolerance parameters.
6.2 Polynomially Decreasing Exploration
In [2] asymptotic results were described that were equivalent to Corollaries 3.1 and 5.3
when p_j = 1/j. This choice of exploration probabilities satisfies
\[ \lim_{i\to\infty} \bar Z_{0i} = \infty \quad \text{and} \quad \lim_{i\to\infty} \bar Z_{0i}/i = 0 \,, \]
so the corollaries apply. We have, however,
\[ \bar Z_{0 i_0} \le \log(i_0) + 1 \,. \]
It follows that the total number of phases required for U ≤ δ to hold grows exponentially in
1/ε, u and r. An alternative scheme, leading to polynomial complexity, can be developed
by choosing p_j = j^{-β}, for some β ∈ (0, 1). In this case,
\[ \bar Z_{0 i_0} \ge \frac{(i_0 + 1)^{1-\beta} - 1}{1-\beta} \quad \text{and} \quad \bar Z_{0 i} \le \frac{i^{1-\beta}}{1-\beta} \,. \]
It follows that the smallest number of phases that guarantees that U ≤ δ is on the order of
\[ i = O\left( \max\left[ \left( \frac{u^{3-\beta}\, r^{(3-\beta)/2}}{\varepsilon^{3-\beta}} \right)^{\frac{1}{1-\beta}} ,\; \left( \frac{u^2}{\varepsilon^2} \log \frac{u^2 \sqrt r}{\varepsilon^2 \delta} \right)^{\frac{1}{1-\beta}} \right] \right) . \]
6.3 Constant-Rate Exploration
The previous exploration schemes have the property that the frequency of exploration vanishes as the number of phases grows. This property is required in order to achieve the
asymptotic optimality results described in Corollaries 3.1 and 5.3. However, it also makes
the EEE method increasingly slower in tracking changes in the policy of the environment.
An alternative approach is to use a constant frequency p_j = λ ∈ (0, 1) of exploration.
Constant-rate exploration does not satisfy the conditions of Corollaries 3.1 and 5.3. However, for any given tolerance level ε, the value of λ can be chosen so that
\[ \Pr\left( \liminf_{i\to\infty} M(i) \ge \max_e \rho^\pi_e - \varepsilon \right) = 1 \,. \]
Moreover, constant-rate exploration yields complexity results similar to those of the
explore-then-exploit scheme. For example, given any tolerance level ε, if
\[ p_j = \lambda = \frac{\varepsilon^2}{8 r u^2} \qquad (j = 1, 2, \ldots) \,, \]
then it follows that U ≤ δ if the number of phases i is on the order of
\[ i = O\left( \frac{r^2 u^5}{\varepsilon^5} \log \frac{u^2}{\varepsilon^2 \delta} \right) . \]
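As a rough numerical illustration (not from the paper), the sketch below compares the three schedules, reading Z̄_{0i} as the expected number of exploration phases among the first i, i.e. the partial sum of the p_j. This reading is consistent with the bounds quoted in this section, but it is our assumption, and all variable names are ours.

```python
import numpy as np

def expected_exploration(p):
    """Partial sums of the exploration probabilities: our reading of Z_bar_{0i}."""
    return np.cumsum(p)

i = 10_000
j = np.arange(1, i + 1)

schedules = {
    "harmonic   p_j = 1/j     ": 1.0 / j,
    "polynomial p_j = j^(-1/2)": j ** -0.5,
    "constant   p_j = 0.01    ": np.full(i, 0.01),
}

for name, p in schedules.items():
    z = expected_exploration(p)
    # Growth ~ log i, ~ i^(1-beta)/(1-beta) and ~ lambda*i respectively;
    # Z/i -> 0 is the condition required by Corollaries 3.1 and 5.3.
    print(f"{name}  Z(10^4) = {z[-1]:9.1f}   Z/i = {z[-1] / i:.4f}")
```

Running it shows Z̄_{0i}/i vanishing for the first two schedules but not for the constant one, which is exactly why constant-rate exploration only reaches the ε-suboptimal guarantee above.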
6.4 Constant Phase Lengths
In all the variants of the EEE method considered so far, the number of stages per phase
increases linearly as a function of the number of phases during which the same expert has
been followed previously. This growth is used to ensure that, as long as the policy of the
environment exhibits some regularity, that regularity is captured by the algorithm. For
instance, if that policy is cyclic, then the EEE method correctly learns the long-term value
of each expert, regardless of the lengths of the cycles.
For practical purposes, it may be necessary to slow down the growth of phase lengths in
order to get some meaningful results in reasonable time. In this section, we consider the
possibility of a constant number L of stages in each phase. Following the same steps that
we took to prove Theorems 3.1, 3.2 and 5.1, we can derive the following results:
Theorem 6.1. If the EEE method is implemented with phases of fixed length L, then for all
i_0, i, and ε such that
\[ \bar Z_{i_0 i} \le \frac{i \varepsilon^2}{2 u^2} - \frac{i_0 \varepsilon}{2u} \,, \]
the following bound holds:
\[ \Pr\left( M(i) \le \max_e \min_{i_0+1 \le j \le i} M_e(j) - 2\varepsilon \right) \le \exp\left( -\frac{1}{2i} \left( \frac{i \varepsilon^2}{2 u^2} - \frac{i_0 \varepsilon}{2u} - \bar Z_{i_0 i} \right)^2 \right) . \]
We can also characterize the expected difference between the average reward of the EEE
method and that of the best expert.
Theorem 6.2. If the EEE method is implemented with phases of fixed length L, then for all
i_0 ≤ i and ε > 0,
\[ E\left[ M(i) - \max_e \min_{i_0+1 \le j \le i} M_e(j) \right] \;\ge\; -\varepsilon - u\,\frac{i_0}{i} - \frac{2 u^2 \bar Z_{i_0 i}}{\varepsilon\, i} \,. \]
Theorem 6.3. If the EEE method is implemented with phases of fixed length L ≥ 2, then
for all ε > 0,
\[ \Pr\left( \inf_{j \ge i} M_e(j) < \rho^\pi_e - \frac{c_\rho}{L^\tau} - \varepsilon \right) \le \frac{2 L^2 u^2}{\varepsilon^2} \exp\left( -\frac{\varepsilon^2 \bar Z_{0i}}{4 L^2 u^2 r} \right) . \]
An important qualitative difference between fixed-length phases and increasing-length ones
is the absence of the number of experts r in the bound given in Theorem 6.2. This implies
that, in the explore-then-exploit or constant-rate exploration schemes, the algorithm requires a number of phases which grows only linearly with r to ensure that
\[ \Pr\left( M(i) \le \max_e \rho^\pi_e - c_\rho/L^\tau - \varepsilon \right) \le \delta \,. \]
Note, however, that we cannot ensure performance better than max_e ρ^π_e − c_ρ/L^τ.
References
[1] Auer, P., Cesa-Bianchi, N., Freund, Y. and Schapire, R.E. (1995) Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proc. 36th Annual IEEE Symp. on Foundations of Computer Science, pp. 322-331, Los Alamitos, CA: IEEE Computer Society Press.
[2] de Farias, D.P. and Megiddo, N. (2004) How to Combine Expert (and Novice) Advice when Actions Impact the Environment. In Advances in Neural Information Processing Systems 16, S. Thrun, L. Saul and B. Schölkopf, Eds., Cambridge, MA: MIT Press. http://books.nips.cc/papers/files/nips16/NIPS2003_CN09.pdf
[3] Freund, Y. and Schapire, R.E. (1999) Adaptive game playing using multiplicative weights. Games and Economic Behavior 29:79-103.
[4] Littlestone, N. and Warmuth, M.K. (1994) The weighted majority algorithm. Information and Computation 108(2):212-261.
Learning Hyper-Features for Visual Identification
Andras Ferencz
Erik G. Learned-Miller
Jitendra Malik
Computer Science Division, EECS
University of California at Berkeley
Berkeley, CA 94720
Abstract
We address the problem of identifying specific instances of a class (cars)
from a set of images all belonging to that class. Although we cannot build
a model for any particular instance (as we may be provided with only one
"training" example of it), we can use information extracted from observing other members of the class. We pose this task as a learning problem,
in which the learner is given image pairs, labeled as matching or not, and
must discover which image features are most consistent for matching instances and discriminative for mismatches. We explore a patch based
representation, where we model the distributions of similarity measurements defined on the patches. Finally, we describe an algorithm that
selects the most salient patches based on a mutual information criterion.
This algorithm performs identification well for our challenging dataset
of car images, after matching only a few, well chosen patches.
1 Introduction
Figure 1 shows six cars: the two leftmost cars were captured by one camera; the right four
cars were seen later by another camera from a different angle. The goal is to determine
which images, if any, show the same vehicle. We call this task visual identification. Most
existing identification systems are aimed at biometric applications such as identifying fingerprints or faces. While object recognition is used loosely for several problems (including
this one), we differentiate visual identification, where the challenge is distinguishing between visually similar objects of one category (e.g. faces, cars), and categorization where
Figure 1: The Identification Problem: Which of these cars are the same? The two cars on the
left, photographed from camera 1, also drive past camera 2. Which of the four images on the right,
taken by camera 2, match the cars on the left? Solving this problem will enable applications such as
wide area tracking of cars with a sparse set of cameras [2, 9].
Figure 2: Detecting and warping car images into alignment: Our identification algorithm assumes
that a detection process has found members of the class and approximately aligned them to a canonical view. For our data set, detection is performed by a blob tracker. A projective warp to align the
sides is computed by calibrating the pose of the camera to the road and finding the wheels of the
vehicle. Note that this is only a rough approximation (the two warped images, center and right, are
far from perfectly aligned) that helps to simplify our patch descriptors and positional bookkeeping.
the algorithm must group together objects that belong to the same category but may be visually diverse [1, 5, 10, 13]. Identification is also distinct from "object localization," where
the goal is locating a specific object in scenes in which distractors have little similarity to
the target object [6].¹
One characteristic of the identification problem is that the algorithm typically only receives
one positive example of each query class (e.g. a single image of a specific car), before
having to classify other images as the "same" or "different". Given this lack of a class-specific training set, we cannot use standard supervised feature selection and classification
methods such as [12, 13, 14]. One possible solution to this problem is to try to pick universally good features, such as corners [4, 6], for detecting salient points. However, such
features are likely to be suboptimal as they are not category specific. Another possibility
is to hand-select good features for the task, such as the distance between the eyes for face
identification.
Here we present an identification framework that attempts to be more general. The core
idea is to use a training set of other image pairs from the category (in our case cars), labeled
as matching or not, to learn what characterizes features that are informative in distinguishing one instance from another (i.e. consistent for matching instances and dissimilar for
mismatches). Our algorithm, given a single novel query image, can build a "same" vs.
"different" classifier by: (1) examining a set of candidate features (local image patches)
on the query image, (2) selecting a small number of them that are likely to be the most
informative for this query class and (3) estimating a function for scoring the match for each
selected feature. Note that a different set of features (patches) will be selected for each
unique query.
The paper is organized as follows. In Section 2, we describe our decision framework including the decomposition of an image pair into bi-patches, which give local indications
of match or mismatch, and introduce the appearance distance between the two halves as
a discriminative statistic of bi-patches. This model is then refined in Section 3 by conditioning the distance distributions on hyper-features such as patch location, contrast, and
dominant orientation. A patch saliency measure based on the estimated distance distributions is introduced in Section 3.4. In Section 4, we extend our model to include another
comparison statistic, the difference in patch position between images. Finally, in Section 5,
we conclude and show that comparing a small number of well-chosen patches produces
performance nearly as good as matching a dense sampling of them.
2 Matching Patches
We seek to determine whether a new query image I^L (the "Left" image) represents the
same vehicle as any of our previously seen database images I^R (the "Right" image). We
assume that these images are known to contain vehicles, have been brought into rough
correspondence (in our data set, through a projective transformation that aligns the sides
of the car) and have been scaled to approximately 200 pixels in length (see Figure 2 for
details).
¹ There is evidence that this distinction exists in the human visual system. Some findings suggest that the fusiform face area is specialized for identification of instances from familiar categories [11].
Figure 3: Patch Matching: The left (query) image is sampled (red dots) by patches encoded as
oriented filter channels (for labeled patch 2, this encoding is shown). Each patch is matched to the
best point in the database image of the same car by maximizing the appearance similarity between the
patches (the similarity score is indicated by the size and color of the dots, where larger and redder is
more similar). Three bi-patches are labeled. Although the classification result for this pair of images
should be "same" (C = 1), notice that some bi-patches are better predictors of this result than others
(the similarity score of patches 2 & 3 is much better than for patch 1). Our goal is to be able to predict the
distributions P(d|C = 1) and P(d|C = 0) for each patch accurately based on the appearance and
position of the patch in the query image (for the 3 patches, our predictions are shown on the right).
2.1 Image Patch Features
Our strategy is to break up the whole image comparison problem into multiple local matching problems, where we encode a small patch F_j^L (1 ≤ j ≤ n) of the query image I^L and
compare each piece separately [12, 14]. As the exact choice of features, their encoding and comparison metric is not crucial to our technique, we chose a fairly simple representation
comparison metric is not crucial to our technique, we chose a fairly simple representation
that was general enough to use in a wide variety of settings, but informative enough to
capture the details of objects (given the subtle variation that can distinguish two different
cars, features such as [6] were found not to be precise enough for this task).
Specifically, we apply a first derivative Gaussian odd-symmetric filter to the patch at four
orientations (horizontal, vertical, and two diagonal), giving four signed numbers per pixel.
To compare a query patch F_j^L to an area of the right image F_j^R, we encode both patches
as 4·25²-length vectors (4 orientations per pixel) and compute the normalized correlation
distance d_j = 1 − CorrCoef(F_j^L, F_j^R) between these vectors. As the two car images are in rough
alignment, we need only search a small area of I^R to find the best corresponding patch
F_j^R, i.e. the one that minimizes d_j. We will refer to such a matched left and right patch
pair (F_j^L, F_j^R), together with the derived distance d_j, as a bi-patch F_j.
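As a hedged sketch of this representation (our parameter choices; the paper's exact filters may differ), the encoding and distance can be written as follows, using the steerability of first-derivative-of-Gaussian filters: the response at angle θ is cos θ · G_x + sin θ · G_y.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def encode_patch(img, y, x, size=25, sigma=1.0):
    """Stack first-derivative-of-Gaussian responses at 0, 45, 90 and 135
    degrees over a size x size window centred at (y, x) into one vector."""
    gy = gaussian_filter(img, sigma, order=(1, 0))   # derivative along y
    gx = gaussian_filter(img, sigma, order=(0, 1))   # derivative along x
    h = size // 2
    win = np.s_[y - h:y + h + 1, x - h:x + h + 1]
    channels = [np.cos(t) * gx[win] + np.sin(t) * gy[win]
                for t in np.deg2rad([0, 45, 90, 135])]
    return np.concatenate([c.ravel() for c in channels])

def patch_distance(f_left, f_right):
    """d_j = 1 - CorrCoef between the two encodings of a bi-patch."""
    a = f_left - f_left.mean()
    b = f_right - f_right.mean()
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# The best right-image match F_j^R is the window minimising patch_distance
# over a small search region around the expected location.
```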
2.2 The Decision Rule
We pose the task of deciding whether a database image I^R is the same as a query image I^L as
a decision rule
\[ R = \frac{P(C=1 \mid I^L, I^R)}{P(C=0 \mid I^L, I^R)} = \frac{P(I^L, I^R \mid C=1)\, P(C=1)}{P(I^L, I^R \mid C=0)\, P(C=0)} > \lambda \,, \tag{1} \]
where λ is chosen to balance the cost of the two types of decision errors. The priors are
assumed to be known.² Specifically, for the remaining equations in this paper, the priors are
assumed to be equal, and hence are dropped from subsequent equations. With our image
decomposition into patches, the posteriors from Eq. (1) will be approximated using the bi-patches F_1, ..., F_m as P(C | I^L, I^R) ≈ P(C | F_1, ..., F_m) ∝ P(F_1, ..., F_m | C). Furthermore,
in this paper, we will assume a naive Bayes model in which, conditioned on C, the bi-patches are assumed to be independent. That is,
\[ R = \frac{P(I^L, I^R \mid C=1)}{P(I^L, I^R \mid C=0)} \approx \frac{P(F_1, \ldots, F_m \mid C=1)}{P(F_1, \ldots, F_m \mid C=0)} = \prod_{j=1}^{m} \frac{P(F_j \mid C=1)}{P(F_j \mid C=0)} \,. \tag{2} \]
² For our application, dynamic models of traffic flow can supply the prior on P(C).
In practice, we compute the log of this likelihood ratio, where each patch contributes an
additive term (denoted LLR_i for patch i). Modeling the likelihoods in this ratio, P(F_j | C),
is the central focus of this paper.
2.3 Uniform Appearance Model
The most straightforward way to estimate P(F_j | C) is to assume that the appearance difference d_j captures all of the information F_j carries about the probability of a match (i.e. C and F_j are independent given d_j), and that the d_j's from all patches are identically distributed. Thus the decision rule, Eqn. (1), becomes
\[ R \approx \prod_{j=1}^{m} \frac{P(d_j \mid C=1)}{P(d_j \mid C=0)} > \lambda \,. \tag{3} \]
The two conditional distributions, P(d_j | C ∈ {0, 1}), are estimated as normalized histograms from all bi-patches matched within the training data.³ For each value of λ, we evaluate Eqn. (3) to classify each test pair as matching or not, producing a precision-recall curve. Figure 4 compares this patch-based model to a direct image comparison method.⁴ Notice that even this naive patch-based technique significantly outperforms the global matching.
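Computationally, the uniform model is just a pair of histograms and a log-ratio. In the sketch below the training distances are synthetic stand-ins (the gamma draws are invented), and the threshold log λ = 0 reflects the equal-priors assumption above.

```python
import numpy as np

def fit_histogram(d, bins):
    """Normalised histogram of distances -> a density estimate over the bins."""
    counts, _ = np.histogram(d, bins=bins)
    p = counts / max(counts.sum(), 1)
    return np.maximum(p, 1e-6)            # avoid log(0) in empty bins

rng = np.random.default_rng(0)
d_same = rng.gamma(2.0, 0.05, 5000)       # stand-ins for training distances, C=1
d_diff = rng.gamma(4.0, 0.08, 5000)       # stand-ins for training distances, C=0
bins = np.linspace(0.0, 2.0, 41)
p1, p0 = fit_histogram(d_same, bins), fit_histogram(d_diff, bins)

def log_likelihood_ratio(d_new):
    """Sum over patches of log P(d|C=1) - log P(d|C=0), cf. Eq. (3)."""
    idx = np.clip(np.digitize(d_new, bins) - 1, 0, len(p1) - 1)
    return float(np.sum(np.log(p1[idx]) - np.log(p0[idx])))

# classify one test pair from its m bi-patch distances
print("same" if log_likelihood_ratio(rng.gamma(2.0, 0.05, 50)) > 0 else "different")
```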
Figure 4: Identification using appearance differences. The bottom curve shows the precision vs. recall for non-patch-based direct comparison of rectified images. (An ideal precision-recall curve would reach the top right corner.) Notice that all three patch-based models outperform this method. The three top curves show results for various models of d_j from Sections 2.3 (Baseline), 3.1 (Discrete), and 3.2 & 3.3 (Continuous). The regression model outperforms the uniform one significantly: it reduces the error in precision by close to 50% for most values of recall below 90%.

3 Refining the Appearance Distributions with Hyper-Features
The most significant weakness of the above model is the assumption that the d_j's from
different bi-patches should be identically distributed (observe the 3 labeled patches in Figure 3). When a training set of "same" (C = 1) and "different" (C = 0) images is available
for a specific query image, estimating these distributions directly for each patch is straightforward. How can we estimate a distribution for P(d_j | C = 1), where F_j^L is a patch from
a new query image, when we only have that single positive example of F_j^L? The intuitive
answer: by finding analogous patches in the training set of labeled (same/different) image
pairs. However, since the space of all possible patches (appearance and position, ℝ^{25×25+2})
is very large, the chance of having seen a very similar patch to F_j^L in the training set is
small. In the next sections we present two approaches, both of which rely on projecting F_j^L
into a much lower dimensional space by extracting meaningful features from its position
and appearance (the hyper-features).
3.1 Non-Parametric Model with Discrete Hyper-Features
First we attempted a non-parametric approach, where we model the joint distribution of d_j and a few hyper-features (e.g. the x and y coordinates of the patch F_j^L,
³ Data consisted of 175 pairs (88 training, 87 test pairs) of matching car images (C=1) from two cameras located on the same side of the street one block apart. Within training and testing sets, about 4000 pairs of mismatched cars (C=0) were formed from non-corresponding images, one from each camera. All comparisons were performed on grayscale (not color) images.
⁴ The global image comparison method used here as a baseline technique uses normalized correlation on a combination of intensity and filter channels, and attempts to overcome slight misalignment.
Figure 5: Fitting a GLM to the Γ distribution: we demonstrate our approach by fitting a gamma distribution, through the latent variables θ = (μ, ν), to the y position of the patches. Here we allowed μ and ν to be a 3rd-degree polynomial function of y (i.e. Z = [y³, y², y, 1]^T). The center-left square shows, on each row, a distribution of d conditioned on the y position of the left patch (F^L) for each bi-patch, for training data taken from matching vehicles. The center-right square shows the same distributions for mismatched data. The height of histogram distributions is color-coded, dark red indicating higher density. The central curve shows the polynomial fit to the conditional means, while the outer curves show the ±σ range. For reference, we include a partial image of a car whose y-coordinate is aligned with the center images. On the right, we show two histogram plots, each corresponding to one row of the center images (a small range of y corresponding to the black arrows). The resulting gamma distributions are superimposed on the histograms.
i.e. Z = [x, y]). The distribution is modeled "non-parametrically" (similar to Section 2.3) using an N-dimensional normalized histogram where each dimension (d, x, and y) has been quantized into several bins. In this model P(C | F_j) ≈ P(C | d_j, y_j, x_j) ∝ P(d_j | y_j, x_j, C) P(y_j, x_j | C) P(C) ∝ P(d_j | y_j, x_j, C), where the last formula follows from the assumption of equal priors (P(C) = 0.5) and the independence of (y_j, x_j) and C. The Discrete Hyper-Features curve in Figure 4 shows the performance gain from conditioning on these positional hyper-features.
3.2 Parametric Model with Continuous Hyper-Features
The drawback of using a non-parametric model for the distributions is that the amount
of data needed to populate the histograms grows exponentially with the number of dimensions. In order to add additional appearance-based hyper-features, such as contrast,
oriented edge energy, etc., we moved to a smooth parametric representation for both the
distribution of d_j and the model by which the hyper-features influence this distribution.
Specifically, we model the distributions P(d_j | C = 1) and P(d_j | C = 0) as gamma distributions (notated Γ()) parameterized by the mean and shape parameter θ = {μ, ν} (see
the right panel of Figure 5 for examples of the Γ() fitting the empirical distributions). The
smooth variation of θ with respect to the hyper-features can be modeled using a generalized linear model (GLM). Ordinary (least-squares) linear models assume that the data is
normally distributed with constant variance. GLMs are extensions to ordinary linear models that can fit data which is not normally distributed and where the dispersion parameter
also depends on the covariates (see [7] for more information on GLMs).
Our goal is to fit gamma distributions to the distributions of d values for various patches by
maximizing the probability density of data under gamma distributions whose parameters
are simple polynomial functions of the hyper-features. Consider a set X_1, ..., X_k of hyper-features such as position, contrast, and brightness of a patch. Let Z = [Z_1, ..., Z_l]^T be a
vector of l pre-chosen functions of those hyper-features, like squares, cubes, cross terms,
or simply copies of the variables themselves. Then each bi-patch distance distribution has
the form
\[ P(d \mid X_1, X_2, \ldots, X_k, C) = \Gamma\big(d;\; \theta^{\mu}_C \cdot Z,\; \theta^{\nu}_C \cdot Z\big) \,, \tag{4} \]
where the second and third arguments to Γ() are mean and shape parameters.⁵ Each θ (there are four of these: θ^μ_{C=0}, θ^ν_{C=0}, θ^μ_{C=1}, θ^ν_{C=1}) is a vector of parameters of length l that weights each hyper-feature monomial Z_i. The θ's are adapted to maximize the joint data likelihood over all patches for C = 0 or C = 1 within the training set. These ideas are illustrated in detail in Figure 5.
⁵ For the GLM, we use the identity link function for both μ and ν. While the identity is not the canonical link function for Γ, its advantage is that our ML optimization can be initialized by solving an ordinary least squares problem. We experimentally compared it to the canonical inverse link (ν = (θ^ν_C · Z)^{-1}), but observed no noticeable change in performance on our data set.
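A minimal maximum-likelihood fit of this gamma GLM might look as follows. It assumes the first column of Z is a constant 1 (intercept) and uses a generic Nelder-Mead optimizer with a moment-based warm start, since the paper does not specify its fitting procedure beyond the OLS initialization mentioned in footnote 5.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_lik(theta, Z, d):
    """Negative gamma log-likelihood with identity links:
    mean = Z @ t_mu, shape = Z @ t_nu (cf. Eq. (4))."""
    t_mu, t_nu = np.split(theta, 2)
    mu, nu = Z @ t_mu, Z @ t_nu
    if np.any(mu <= 0) or np.any(nu <= 0):      # identity link needs positivity
        return np.inf
    return -np.sum(nu * np.log(nu / mu) + (nu - 1) * np.log(d)
                   - nu * d / mu - gammaln(nu))

def fit_gamma_glm(Z, d):
    """ML fit of one (theta_mu, theta_nu) pair; assumes Z[:, 0] == 1."""
    l = Z.shape[1]
    init = np.zeros(2 * l)
    init[0] = d.mean()                          # intercept-only warm start
    init[l] = d.mean() ** 2 / d.var()           # method-of-moments shape
    res = minimize(neg_log_lik, init, args=(Z, d), method="Nelder-Mead",
                   options={"maxiter": 20000})
    return np.split(res.x, 2)
```

Four such fits (μ and ν, for C = 0 and C = 1) give the per-patch predicted distributions used below.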
3.3 Automatic Selection of Hyper-Features
In this section we describe the automatic determination of Z. Recall that in our GLM model
we assumed a linear relationship between Z and μ, ν. This allows us to use standard feature
selection techniques, such as Least Angle Regression (LARS) [3], to choose a few (around
10) hyper-features from a large set of candidates,⁶ such as: (a) the x and y positions of F^L,
(b) the intensity and contrast within F^L and the average intensity of the entire vehicle, (c)
the average energy in each of the 8 oriented filter channels, and (d) derived quantities from
the above (e.g. square, cubic, and cross terms). LARS was then asked to choose Z from
these features. Once Z is set, we proceed as in Section 3.2.
Running an automatic feature selection technique on this large set of possible conditioning
features gives us a principled method of reducing the complexity of our model. Reducing
the complexity is important not only to speed up computation, but also to mitigate the risk
of over-fitting to the training set. The top curve in Figure 4 shows results when Z includes
the first 10 features found by LARS. Even with such a naive set of features to choose from,
the performance of the system improves significantly.
3.4 Estimating the Saliency of a Patch
From the distributions P(d_j | C = 0) and P(d_j | C = 1) computed separately for each patch,
it is also possible to estimate the saliency of the patch, i.e. the amount of information about
our decision variable C we are likely to gain should we compute the best corresponding
F_j^R. Intuitively, if the distribution of D_j is very different for C = 0 and C = 1, then
the amount of information gained by matching patch j is likely to be large (see the 3
distributions on the right of Figure 3). To emphasize the fact that the distribution P(d_j | C)
is a fixed function of F_j^L, given the learned hyper-feature weights θ, we slightly abuse
notation and refer to the random variable from which d_j is sampled as F_j^L.
With this notation, computing the mutual information between F_j^L and C gives us a measure of the expected information gain from a patch with particular hyper-features:
\[ I(F_j^L; C) = H(F_j^L) - H(F_j^L \mid C) \,. \]
Here H() is Shannon entropy. The key fact to notice is that this measure can be computed
just from the estimated distributions over d_j (which, in turn, were estimated from the hyper-features of F_j^L) before the patch has been matched. This allows us to match only those
patches that are likely to be informative, leading to significant computational savings.
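Because both conditional densities are gamma distributions predicted from the hyper-features, this saliency can be evaluated numerically before any matching is done. The sketch below integrates on a grid and assumes equal priors; the parameter values in the demo calls are invented.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def patch_saliency(mu1, nu1, mu0, nu0, n_grid=3000):
    """Mutual information I(F; C) = H(d) - H(d|C) in bits, by numerical
    integration of the two predicted gamma densities (equal priors)."""
    grid = np.linspace(1e-4, 3.0, n_grid)
    p1 = gamma_dist.pdf(grid, a=nu1, scale=mu1 / nu1)   # P(d | C=1)
    p0 = gamma_dist.pdf(grid, a=nu0, scale=mu0 / nu0)   # P(d | C=0)
    mix = 0.5 * (p1 + p0)                               # P(d)
    dx = grid[1] - grid[0]
    ent = lambda p: -np.sum(np.where(p > 0, p * np.log2(p), 0.0)) * dx
    return ent(mix) - 0.5 * (ent(p1) + ent(p0))

# well-separated predicted distributions -> informative patch, matched first
print(patch_saliency(0.2, 4.0, 0.6, 4.0))    # high MI
print(patch_saliency(0.4, 4.0, 0.45, 4.0))   # overlapping -> low MI
```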
4 Modeling Appearance and Position Differences
In the last section, we only considered the similarity of two matching patches that make up
a bi-patch in terms of the appearance of the patches (d_j). Recall that for each left patch
F_j^L, a matching right patch F_j^R is found by searching for the most similar patch in some
large neighborhood around the expected location for the match. In this section, we show
how to model the change in position, r_j, of the match relative to its expected location, and
how this, when combined with the appearance model, improves the matching performance.
⁶ In order to use LARS (or most other feature selection methods) "out of the box", we use regression based on an L2 loss function. While this is not optimal for non-normal data, from experiments we have verified that it is a reasonable approximation for the feature selection step.
Figure 6: Results: The LEFT plot shows precision vs. recall curves for models of r. The results for Δx and Δy are shown separately (as there are often more horizontal than vertical features on cars, Δy is better). Re-estimating parameters of the global alignment, W (affine fit), significantly improves the curves. Finally, performance is improved by combining position with appearance ("Complete" curve) compared to using appearance alone. The CENTER pair of images show a correct match, with the patch centers indicated by circles. The color of the circles in the top image indicates MI_j, in the bottom image LLR_j. Our patch selection algorithm chooses the top patches based on MI, where subsequent patches are penalized for overlapping with earlier ones (neighborhood suppression). The top 10 "left" patches chosen are marked with arrows connecting them to the corresponding "right" patches. Notice that these are concentrated in informative regions. The RIGHT plot quantifies this observation: the curves show 3 different methods of choosing the order of patches: random order, MI, and MI with neighborhood suppression. Notice that the top curve with 3 patches does as well as the direct comparison method. All 3 methods converge above 50 patches.
Let r_j = (Δx_j, Δy_j) be the difference in position between the coordinates of F_j^L and F_j^R
within the standardized coordinate frames. Generally, we expect r_j ≈ 0 if the two images
portray the same object (C = 1). The estimate for R, incorporating the information from
both d and r, becomes
\[ R \approx \prod_{j=1}^{m} \frac{P(r_j \mid d_j, Z_j, C=1)\, P(d_j \mid Z_j, C=1)}{P(r_j \mid d_j, Z_j, C=0)\, P(d_j \mid Z_j, C=0)} \,, \tag{5} \]
where Z_j again refers to a set of hyper-features.
Here we focus on the first factor, where the distribution of r_j given C is dependent on the
appearance and position of the left patch (F_j^L, through the hyper-features Z_j) and on the
similarity in appearance (d_j). The intuition for the dependence on d_j is that for the C = 1
case, we expect r_j to be smaller on average when a good appearance match (small d_j) was
found.
Following our approach for d_j, we model the distribution of r_j as a zero-mean normal distribution, N(0, Σ), where Σ (we use a diagonal covariance) is a function of (Z_j, d_j). The
parameterization of (Z_j, d_j) is found through feature selection, while the weights for the
linear function are obtained by maximizing the likelihood of r_j over the training data. To
address initial misalignment, we select a small number of patches, match them, and compute a global affine alignment between the images. We subsequently score each match
relative to this global alignment.
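Combining the two factors of Eq. (5), each bi-patch contributes a sum of a gamma log-ratio and two Gaussian log-ratios. In this sketch all distribution parameters are hard-coded stand-ins for what the fitted models would predict from (Z_j, d_j).

```python
import numpy as np
from scipy.stats import gamma as gamma_dist, norm

def patch_llr(d, r, gamma_params, sigma_params):
    """Per-bi-patch log-likelihood ratio combining appearance (d) and
    displacement r = (dx, dy), cf. Eq. (5). gamma_params holds (mu, nu)
    for C=1 and C=0; sigma_params holds the predicted std. devs. of each
    displacement coordinate under C=1 and C=0 (diagonal covariance)."""
    (mu1, nu1), (mu0, nu0) = gamma_params
    (s1x, s1y), (s0x, s0y) = sigma_params
    llr = (gamma_dist.logpdf(d, a=nu1, scale=mu1 / nu1)
           - gamma_dist.logpdf(d, a=nu0, scale=mu0 / nu0))
    llr += norm.logpdf(r[0], 0.0, s1x) - norm.logpdf(r[0], 0.0, s0x)
    llr += norm.logpdf(r[1], 0.0, s1y) - norm.logpdf(r[1], 0.0, s0y)
    return llr

# small displacement and a good appearance match add evidence for "same"
print(patch_llr(0.15, (0.8, -1.2),
                gamma_params=((0.2, 4.0), (0.6, 4.0)),
                sigma_params=((1.5, 1.5), (6.0, 6.0))))
```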
The bottom four curves of Figure 6 show that fitting an affine model first significantly
improves the positional signal. While position seems to be less informative than appearance, the complete model, which combines appearance and position (Eq. 5), outperforms
appearance alone.
5 Conclusion
The center and right sides of Figure 6 show our ability to select the most informative
patches using the estimated mutual information I(F_j^L; C) of each patch. To prevent spatially overlapping patches from being chosen, we added a penalty factor to the mutual information score that penalizes patches that are very close to other chosen patches (MI with
neighborhood suppression). To give a numerical indication of the performance, we note
that with only 10 patches, given a 1-to-87 forced choice problem, our algorithm chooses
the correct matching image 93% of the time.
A different approach to a learning problem that is similar to ours can be found in [5, 8],
which describe methods for learning character or object categories from few training examples. These works approach this problem by learning distributions on shared factors
[8] or priors on parameters of fixed distributions for a category [5] where the training data
consists of images from other categories. We, on the other hand, abandon the notion of
building a model with a fixed form for an object from a single example. Instead, we take
a discriminative approach and model the statistical properties of image patch differences
conditioned on properties of the patch. These learned conditional distributions allow us to
evaluate, for each feature, the amount of information potentially gained by matching it to
the other image.⁷
Acknowledgments
This work was partially funded by DARPA under the Combat Zones That See project.
References
[1] Y. Amit and D. Geman. A computational model for visual selection. Neural Computation, 11(7), 1999.
[2] D. Beymer, P. McLauchlan, B. Coifman, and J. Malik. A real-time computer vision system for measuring traffic parameters. CVPR, 1997.
[3] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407-499, 2004.
[4] T. Kadir and M. Brady. Scale, saliency and image description. International Journal of Computer Vision, 45(2):83-105, 2001.
[5] F. Li, R. Fergus, and P. Perona. A Bayesian approach to unsupervised one-shot learning of object categories. In ICCV, 2003.
[6] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[7] P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman and Hall, 1989.
[8] E. Miller, N. Matsakis, and P. Viola. Learning from one example through shared densities on transforms. In CVPR, 2000.
[9] H. Pasula, S. Russell, M. Ostland, and Y. Ritov. Tracking many objects with many sensors. IJCAI, 1999.
[10] H. Schneiderman and T. Kanade. A statistical approach to 3d object detection applied to faces and cars. CVPR, 2000.
[11] M. Tarr and I. Gauthier. FFA: A flexible fusiform area for subordinate-level visual processing automatized by expertise. Nature Neuroscience, 3(8):764-769, 2000.
[12] M. Vidal-Naquet and S. Ullman. Object recognition with informative features and linear classification. In International Conference on Computer Vision, 2003.
[13] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, 2001.
[14] M. Weber, M. Welling, and P. Perona. Unsupervised learning of models for recognition. ECCV, 2000.
⁷ Answer to Figure 1: top left matches bottom center; bottom left matches bottom right. For our algorithm, matching these images was not a challenge.
Nonlinear Blind Source Separation by Integrating Independent Component Analysis and Slow Feature Analysis
Tobias Blaschke
Institute for Theoretical Biology
Humboldt University Berlin
Invalidenstraße 43, D-10115 Berlin, Germany
[email protected]
Laurenz Wiskott
Institute for Theoretical Biology
Humboldt University Berlin
Invalidenstraße 43, D-10115 Berlin, Germany
[email protected]
Abstract
In contrast to the equivalence of linear blind source separation and linear
independent component analysis it is not possible to recover the original source signal from some unknown nonlinear transformations of the
sources using only the independence assumption. Integrating the objectives of statistical independence and temporal slowness removes this
indeterminacy leading to a new method for nonlinear blind source separation. The principle of temporal slowness is adopted from slow feature
analysis, an unsupervised method to extract slowly varying features from
a given observed vectorial signal. The performance of the algorithm is
demonstrated on nonlinearly mixed speech data.
1 Introduction
Unlike in the linear case, the nonlinear Blind Source Separation (BSS) problem cannot be
solved solely based on the principle of statistical independence [1, 2]. Performing nonlinear BSS with Independent Component Analysis (ICA) requires additional information
about the underlying sources or a regularization of the nonlinearities. Since source signal components are usually more slowly varying than any nonlinear mixture of them, we
require the estimated sources to be as slowly varying as possible. This can be achieved
by incorporating ideas from Slow Feature Analysis (SFA) [3] into ICA.
After a short introduction to linear BSS, nonlinear BSS, and SFA, we show how
to combine SFA and ICA to obtain an algorithm that solves the nonlinear BSS problem.
2 Linear Blind Source Separation
Let x(t) = [x_1(t), ..., x_N(t)]^T be a linear mixture of a source signal s(t) = [s_1(t), ..., s_N(t)]^T, defined by
\[ x(t) = A\, s(t) \,, \tag{1} \]
with an invertible N × N mixing matrix A. Finding a mapping
\[ u(t) = Q W x(t) \tag{2} \]
such that the components of u are mutually statistically independent is called Independent Component Analysis (ICA). The mapping is often divided into a whitening mapping
W, resulting in uncorrelated signal components y_i with unit variance, and a successive orthogonal transformation Q, because one can show [4] that after whitening an orthogonal
transformation is sufficient to obtain independence. It is well known that ICA solves the
linear BSS problem [4]. There exists a variety of algorithms performing ICA and therefore
BSS (see e.g. [5, 6, 7]). Here we focus on a method using only second-order statistics
introduced by Molgedey and Schuster [8]. The method consists of optimizing an objective
function subject to minimization, which can be written as
\[ \Psi_{ICA}(Q) = \sum_{\substack{\alpha,\beta=1 \\ \alpha\ne\beta}}^{N} \left( C^{(u)}_{\alpha\beta}(\tau) \right)^2 = \sum_{\substack{\alpha,\beta=1 \\ \alpha\ne\beta}}^{N} \left( \sum_{\gamma,\delta=1}^{N} Q_{\alpha\gamma} Q_{\beta\delta} C^{(y)}_{\gamma\delta}(\tau) \right)^2 , \tag{3} \]
operating on the already whitened signal y. C^{(y)}_{γδ}(τ) is an entry of a symmetrized time-delayed covariance matrix defined by
\[ C^{(y)}(\tau) = \left\langle y(t)\, y(t+\tau)^T + y(t+\tau)\, y(t)^T \right\rangle , \tag{4} \]
and C^{(u)}(τ) is defined correspondingly. Q_{αβ} denotes an entry of Q. Minimization of
Ψ_ICA can be understood intuitively as finding an orthogonal matrix Q that diagonalizes the
covariance matrix with time delay τ. Since, because of the whitening, the instantaneous
covariance matrix is already diagonal, this results in signal components that are decorrelated
instantaneously and at a given time delay τ. This can be sufficient to achieve statistical
independence [9].
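Written out for a finite sample, the covariance (4) and the objective (3) are only a few lines of code. This is a direct transcription (our variable names), with time averages replaced by sample means.

```python
import numpy as np

def delayed_cov(y, tau):
    """Symmetrised time-delayed covariance C^(y)(tau) of Eq. (4);
    y has shape (T, N) and is assumed zero mean (e.g. whitened)."""
    a, b = y[:len(y) - tau], y[tau:]
    c = a.T @ b / len(a)
    return c + c.T

def psi_ica(Q, C_y_tau):
    """Sum of squared off-diagonal entries of C^(u)(tau) = Q C^(y)(tau) Q^T,
    i.e. the objective (3) to be minimised over orthogonal Q."""
    C_u = Q @ C_y_tau @ Q.T
    return np.sum(C_u ** 2) - np.sum(np.diag(C_u) ** 2)
```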
2.1 Nonlinear BSS and ICA
An obvious extension to the linear mixing model (1) has the form
\[ x(t) = F(s(t)) \,, \tag{5} \]
with a function F(·): ℝ^N → ℝ^M that maps N-dimensional source vectors s onto M-dimensional signal vectors x. The components x_i of the observable are a nonlinear mixture
of the sources, and like in the linear case the source signal components s_i are assumed to be
mutually statistically independent. Extracting the source signal is in general only possible
if F(·) is an invertible function, which we will assume from now on.
The equivalence of BSS and ICA in the linear case does in general not hold for a nonlinear
function F(·) [1, 2]. To solve the nonlinear BSS problem, additional constraints on the
mixture or the estimated signals are needed to bridge the gap between ICA and BSS. Here
we propose a new way to achieve this by adding a slowness objective to the independence
objective of pure ICA. Assume for example that a sinusoidal signal component x_i = sin(2πt)
is given, together with a second component that is the square of the first, x_j = x_i² = 0.5(1 − cos(4πt)).
The second component is more quickly varying due to the frequency doubling
induced by the squaring. Typically, nonlinear mixtures of signal components are more
quickly varying than the original components. To extract the right source components one
should therefore prefer the slowly varying ones. The concept of slowness is used in our
approach to nonlinear BSS by combining an ICA part, which provides the independence of
the estimated source signal components, with a part that prefers slowly varying signals over
more quickly varying ones. In the next section we give a short introduction to Slow
Feature Analysis (SFA), which builds the basis of the second part of our method.
3 Slow Feature Analysis
Assume a vectorial input signal x(t) = [x_1(t), ..., x_M(t)]^T is given. The objective of SFA
is to find an in general nonlinear input-output function u(t) = g(x(t)) with g(x(t)) =
[g_1(x(t)), ..., g_R(x(t))]^T such that the u_i(t) vary as slowly as possible. This
can be achieved by successively minimizing the objective function
\[ \Delta(u_i) := \langle \dot u_i^2 \rangle \tag{6} \]
for each u_i under the constraints
\[ \langle u_i \rangle = 0 \quad \text{(zero mean)}, \tag{7} \]
\[ \langle u_i^2 \rangle = 1 \quad \text{(unit variance)}, \tag{8} \]
\[ \langle u_i u_j \rangle = 0 \;\; \forall\, j < i \quad \text{(decorrelation and order)}. \tag{9} \]
Constraints (7) and (8) ensure that the solution will not be the trivial solution u_i = const.
Constraint (9) provides uncorrelated output signal components and thus guarantees that
different components carry different information. Intuitively, we are searching for signal
components u_i that have on average a small slope.
Interestingly, Slow Feature Analysis (SFA) can be reformulated with an objective function
similar to second-order ICA, subject to maximization [10],
\[ \Psi_{SFA}(Q) = \sum_{\alpha=1}^{M} \left( C^{(u)}_{\alpha\alpha}(\tau) \right)^2 = \sum_{\alpha=1}^{M} \left( \sum_{\gamma,\delta=1}^{M} Q_{\alpha\gamma} Q_{\alpha\delta} C^{(y)}_{\gamma\delta}(\tau) \right)^2 . \tag{10} \]
To understand (10) intuitively, we notice that slowly varying signal components are easier
to predict and should therefore have strong autocorrelations in time. Thus, maximizing
the time-delayed variances produces slowly varying signal components.
4 Independent Slow Feature Analysis
If we combine ICA and SFA we obtain a method we refer to as Independent Slow Feature
Analysis (ISFA) that recovers independent components out of a nonlinear mixture using a
combination of SFA and second-order ICA. As already explained, second-order ICA tends
to make the output components independent and SFA tends to make them slow. Since
we are dealing with a nonlinear mixture we first compute a nonlinearly expanded signal
z = h(x), with h(·): ℝ^M → ℝ^L typically consisting of monomials up to a given degree; e.g. an
expansion with monomials up to second degree can be written as
\[ h(x(t)) = [x_1, \ldots, x_M,\; x_1 x_1,\; x_1 x_2,\; \ldots,\; x_M x_M]^T - h_0 \tag{11} \]
when given an M-dimensional signal x. The constant vector h_0 is used to make the expanded signal mean free. In a second step z is whitened to obtain y = Wz. Thirdly we
apply linear ICA combined with linear SFA on y in order to find the estimated source signal
u. Because of the whitening we know that ISFA, like ICA and SFA, is solved by finding
an orthogonal L × L matrix Q. We write the estimated source signal u as
\[ v = \begin{pmatrix} u \\ \hat u \end{pmatrix} = Q y = Q W z = Q W h(x) \,, \tag{12} \]
where we introduced û, since R, the dimension of the estimated source signal u, is usually
much smaller than L, the dimension of the expanded signal. While the u_i are statistically
independent and slowly varying, the components û_i are more quickly varying and may be
statistically dependent on each other as well as on the selected components.
To summarize, we have an M-dimensional input x, an L-dimensional nonlinearly expanded
and whitened y, and an R-dimensional estimated source signal u. ISFA searches an R-dimensional subspace such that the u_i are independent and slowly varying. This is achieved
at the expense of all û_i.
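As a sketch of the first two steps of this pipeline (our implementation choices, not the paper's), here is a quadratic expansion per Eq. (11) and PCA whitening, with near-singular directions dropped.

```python
import numpy as np

def expand_quadratic(x):
    """All monomials of degree one and two of the signal components,
    Eq. (11); x has shape (T, M). Subtracting the mean plays the role
    of the constant vector h_0."""
    T, M = x.shape
    quads = [x[:, i] * x[:, j] for i in range(M) for j in range(i, M)]
    z = np.column_stack([x] + quads)
    return z - z.mean(axis=0)

def whiten(z):
    """PCA whitening y = z W with <y y^T> = I (row-vector convention)."""
    cov = z.T @ z / len(z)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10 * eigval.max()    # discard near-singular directions
    W = eigvec[:, keep] / np.sqrt(eigval[keep])
    return z @ W, W
```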
4.1 Objective Function
To recover R source signal components u_i, i = 1, ..., R, out of an L-dimensional expanded
and whitened signal y, the objective reads
\[ \Psi_{ISFA}(u_1, \ldots, u_R; \tau) = b_{ICA} \sum_{\substack{\alpha,\beta=1 \\ \alpha\ne\beta}}^{R} \left( C^{(u)}_{\alpha\beta}(\tau) \right)^2 - b_{SFA} \sum_{\alpha=1}^{R} \left( C^{(u)}_{\alpha\alpha}(\tau) \right)^2 , \tag{13} \]
where we simply combine the ICA objective (3) and the SFA objective (10), weighted by the
factors b_ICA and b_SFA, respectively. Note that the ICA objective is usually applied in the
linear case to unmix the linear whitened mixture y, whereas here it is used on the nonlinearly
expanded whitened signal y = Wz. ISFA tries to minimize Ψ_ISFA, which is the reason why
the SFA part has a negative sign.
4.2 Optimization Procedure
From (12) we know that C^{(u)}(τ) in (13) depends on the orthogonal matrix Q. There are
several ways to find the orthogonal matrix that minimizes the objective function. Here we
apply successive Givens rotations to obtain Q. A Givens rotation Q^{μν} is a rotation around
the origin within the plane of two selected components μ and ν and has the matrix form
\[ Q^{\mu\nu}_{\alpha\beta} := \begin{cases} \cos(\phi) & \text{for } (\alpha,\beta) \in \{(\mu,\mu), (\nu,\nu)\} \\ -\sin(\phi) & \text{for } (\alpha,\beta) \in \{(\mu,\nu)\} \\ \sin(\phi) & \text{for } (\alpha,\beta) \in \{(\nu,\mu)\} \\ \delta_{\alpha\beta} & \text{otherwise} \end{cases} \tag{14} \]
with Kronecker symbol δ_{αβ} and rotation angle φ. Any orthogonal L × L matrix such
as Q can be written as a product of L(L−1)/2 (or more) Givens rotation matrices Q^{μν} (for
the rotation part) and a diagonal matrix with elements ±1 (for the reflection part). Since
reflections do not matter in our case, we only consider the Givens rotations, as is often done
in second-order ICA algorithms (see e.g. [11]).
We can therefore write the objective as a function of a Givens rotation Q^{μν} as
\[ \Psi_{ISFA}(Q^{\mu\nu}) = b_{ICA} \sum_{\substack{\alpha,\beta=1 \\ \alpha\ne\beta}}^{R} \left( \sum_{\gamma,\delta=1}^{L} Q^{\mu\nu}_{\alpha\gamma} Q^{\mu\nu}_{\beta\delta} C^{(y)}_{\gamma\delta}(\tau) \right)^2 - b_{SFA} \sum_{\alpha=1}^{R} \left( \sum_{\gamma,\delta=1}^{L} Q^{\mu\nu}_{\alpha\gamma} Q^{\mu\nu}_{\alpha\delta} C^{(y)}_{\gamma\delta}(\tau) \right)^2 . \tag{15} \]
Assume we want to minimize Ψ_ISFA for a given R, where R denotes the number of signal
components we want to extract. Applying a Givens rotation Q^{μν}, we have to distinguish
three cases.
• Case 1: Both axes u_μ and u_ν lie inside the subspace spanned by the first R axes
(μ, ν ≤ R). The sum over all squared cross-correlations of all signal components
that lie outside the subspace is constant, as well as those of all signal components
inside the subspace. There is no interaction between inside and outside; in fact the
objective function is exactly the objective for an ICA algorithm based on second-order statistics, e.g. TDSEP or SOBI [12, 13]. In [10] it has been shown that this
is equivalent to SFA in the case of a single time delay.
• Case 2: Only one axis, w.l.o.g. u_μ, lies inside the subspace, the other, u_ν, outside
(μ ≤ R < ν). Since one axis of the rotation plane lies outside the subspace, u_μ in
the objective function can be optimized at the expense of û_ν outside the subspace.
A rotation of π/2, for instance, would simply exchange components u_μ and u_ν.
This gives the possibility to find the slowest and most independent components in
the whole space spanned by all u_i and û_j (i = 1, ..., R, j = R+1, ..., L), in
contrast to Case 1, where the minimum is searched within the subspace spanned
by the R components in the objective function.
• Case 3: Both axes lie outside the subspace (R < μ, ν): A Givens rotation with
the two rotation axes outside the relevant subspace does not affect the objective
function and can therefore be disregarded.
It can be shown that, like in [14], the objective function (15) as a function of \phi can always
be written in the form

\Psi^{\mu\nu}_{ISFA}(\phi) = A_0 + A_2 \cos(2\phi + \phi_2) + A_4 \cos(4\phi + \phi_4) ,    (16)

where the second term on the right hand side vanishes for Case 1. There exists a single
minimum (if w.l.o.g. \phi \in [-\pi/2, \pi/2)) that can easily be calculated (see e.g. [14]). The
derivation of (16) involves various trigonometric identities and, because of its length, is
documented elsewhere¹.
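The closed-form minimizer of (16) is worked out in [14]; purely for illustration, a dense grid scan over φ ∈ [−π/2, π/2) recovers it numerically once the coefficients A_0, A_2, φ_2, A_4, φ_4 have been computed:

```python
import numpy as np

def min_angle(A0, A2, phi2, A4, phi4, grid=10001):
    """Numerically locate the single minimum of (16) on [-pi/2, pi/2)."""
    phi = np.linspace(-np.pi / 2, np.pi / 2, grid, endpoint=False)
    psi = A0 + A2 * np.cos(2 * phi + phi2) + A4 * np.cos(4 * phi + phi4)
    return phi[np.argmin(psi)]
```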
It is important to notice that the rotation planes of the Givens rotations are selected from
the whole L-dimensional space, whereas the objective function only uses information about
correlations among the first R signal components u_i. Successive application of Givens
rotations Q^{\mu\nu} leads to the final rotation matrix Q, which in the ideal case is such that
Q^T C^{(y)}(\tau) Q = C^{(v)}(\tau) has a diagonal R × R submatrix C^{(u)}(\tau); but it is not clear if
the final minimum is also the global one. However, in various simulations no local minima
have been found.
4.3
Incremental Extraction of Independent Components
It is possible to find the number of independent source signal components R by successively increasing the number of components to be extracted; a minimal sketch of this loop is given below. In each step the objective
function (13) is optimized for a fixed R. First a single signal component is extracted
(R = 1), then an additional one (R = 2), etc. The algorithm is stopped when no additional signal component can be extracted. As a stopping criterion every suitable measure
of independence can be applied; we used the sum over squared cross-cumulants of fourth
order. In our artificial examples this value is typically small for independent components,
and increases by two orders of magnitude if the number of components to be extracted is
greater than the number of original source signal components.
¹ http://itb.biologie.hu-berlin.de/~blaschke
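The incremental scheme just described can be sketched as the following loop; optimize_isfa (the Givens-rotation optimization of Sec. 4.2) and cumulant_score (sum of squared fourth-order cross-cumulants) are hypothetical helpers standing in for routines not spelled out here, and the threshold would be chosen from the observed two-orders-of-magnitude jump.

```python
def incremental_isfa(y, tau, optimize_isfa, cumulant_score, threshold):
    """Grow R until the independence criterion degrades sharply.

    Returns the extracted components and the estimated number R of sources.
    """
    R = 1
    u = optimize_isfa(y, R, tau)
    while True:
        u_next = optimize_isfa(y, R + 1, tau)
        if cumulant_score(u_next) > threshold:   # criterion jumps once R exceeds
            return u, R                          # the true number of sources
        u, R = u_next, R + 1
```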
5
Simulation
Here we show a simple example, with two nonlinearly mixed signal components as shown
in Figure 1. The mixture is defined by

x_1(t) = (s_1(t) + 1) \sin(\pi s_2(t)) ,
x_2(t) = (s_1(t) + 1) \cos(\pi s_2(t)) .    (17)
We used the ISFA algorithm with different nonlinearities (see Tab. 1). A nonlinear expansion with monomials up to degree three was already sufficient to give good results in
extracting the original source signal (see Fig. 1). In all cases ISFA found exactly two
independent signal components. A linear BSS method failed completely to find a good
unmixing matrix.
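For completeness, the mixture (17) is easy to reproduce; the particular source waveforms below are our own choice (the text does not specify the signals behind Figure 1), picked merely to be smooth and slowly varying:

```python
import numpy as np

T = 5000
t = np.arange(T)
s1 = np.sin(2 * np.pi * t / 400)      # hypothetical slowly varying sources
s2 = np.cos(2 * np.pi * t / 270)

x1 = (s1 + 1) * np.sin(np.pi * s2)    # nonlinear mixture (17)
x2 = (s1 + 1) * np.cos(np.pi * s2)
x = np.vstack([x1, x2])               # input to the expansion/whitening steps
```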
6
Conclusion
We have shown that connecting the ideas of slow feature analysis and independent component analysis into ISFA is a possible way to solve the nonlinear blind source separation
problem. SFA enforces the independent components of ICA to be slowly varying, which
seems to be a good way to discriminate between the original and nonlinearly distorted
source signal components. A simple simulation showed that ISFA is able to extract the
original source signal out of a nonlinear mixture. Furthermore, ISFA can predict the number of source signal components via an incremental optimization scheme.
Acknowledgments
This work has been supported by the Volkswagen Foundation through a grant to LW for a
junior research group.
References
[1] A. Hyvärinen and P. Pajunen. Nonlinear independent component analysis: existence
and uniqueness results. Neural Networks, 12(3):429–439, 1999.
[2] C. Jutten and J. Karhunen. Advances in nonlinear blind source separation. In Proc.
of the 4th Int. Symposium on Independent Component Analysis and Blind Signal Separation, Nara, Japan, (ICA 2003), pages 245–256, 2003.
[3] Laurenz Wiskott and Terrence Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
Table 1: Correlation coefficients of extracted (u1 and u2) and original (s1 and s2) source
signal components for linear ICA (first column pair) and ISFA with different nonlinearities (monomials up
to degree 2, 3, and 4). Using monomials up to degree 3 in the nonlinear expansion step
already suffices to extract the original source signal. Note that the source signal can only
be estimated up to permutation and scaling, resulting in different signs and permutations of
the two estimated source signal components.

          linear            degree 2          degree 3          degree 4
          u1       u2       u1       u2       u1       u2       u1       u2
   s1    -0.803   -0.544   -0.001   -0.978    0.001    0.995    0.002    0.995
   s2     0.332    0.517   -0.988   -0.001   -0.995    0.001   -0.996    0.000
[Figure 1: three panels of waveforms and scatter plots — (a) s1, s2; (b) x1, x2; (c) u1, u2.]
Figure 1: Waveforms and scatter plots of (a) the original source signal components s_i,
(b) the nonlinear mixture, and (c) recovered components with nonlinear ISFA (u_i). As a
nonlinearity we used all monomials up to degree 4.
[4] P. Comon. Independent component analysis, a new concept? Signal Processing,
36(3):287–314, 1994. Special Issue on Higher-Order Statistics.
[5] J.-F. Cardoso and A. Souloumiac. Blind beamforming for non Gaussian signals. IEE
Proceedings-F, 140:362–370, 1993.
[6] T.-W. Lee, M. Girolami, and T.J. Sejnowski. Independent component analysis using
an extended Infomax algorithm for mixed sub-Gaussian and super-Gaussian sources.
Neural Computation, 11(2):409–433, 1999.
[7] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component
analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[8] L. Molgedey and G. Schuster. Separation of a mixture of independent signals using
time delayed correlations. Physical Review Letters, 72(23):3634–3637, 1994.
[9] Lang Tong, Ruey-wen Liu, Victor C. Soon, and Yih-Fang Huang. Indeterminacy and
identifiability of blind identification. IEEE Transactions on Circuits and Systems,
38(5):499–509, May 1991.
[10] T. Blaschke, L. Wiskott, and P. Berkes. What is the relation between independent
component analysis and slow feature analysis? (in preparation), 2004.
[11] Jean-François Cardoso and Antoine Souloumiac. Jacobi angles for simultaneous diagonalization. SIAM J. Mat. Anal. Appl., 17(1):161–164, 1996.
[12] A. Ziehe and K.-R. Müller. TDSEP — an efficient algorithm for blind separation using
time structure. In Proc. of the 8th Int. Conference on Artificial Neural Networks
(ICANN'98), pages 675–680, Berlin, 1998. Springer Verlag.
[13] Adel Belouchrani, Karim Abed Meraim, Jean-François Cardoso, and Éric Moulines.
A blind source separation technique based on second order statistics. IEEE Transactions on Signal Processing, 45(2):434–444, 1997.
[14] T. Blaschke and L. Wiskott. CuBICA: Independent component analysis by simultaneous third- and fourth-order cumulant diagonalization. IEEE Transactions on Signal
Processing, 52(5):1250–1256, 2004.
Methods for Estimating the Computational
Power and Generalization Capability of Neural
Microcircuits
Wolfgang Maass, Robert Legenstein, Nils Bertschinger
Institute for Theoretical Computer Science
Technische Universität Graz
A-8010 Graz, Austria
{maass, legi, nilsb}@igi.tugraz.at
Abstract
What makes a neural microcircuit computationally powerful? Or more
precisely, which measurable quantities could explain why one microcircuit C is better suited for a particular family of computational tasks than
another microcircuit C′? We propose in this article quantitative measures
for evaluating the computational power and generalization capability of a
neural microcircuit, and apply them to generic neural microcircuit models drawn from different distributions. We validate the proposed measures by comparing their prediction with direct evaluations of the computational performance of these microcircuit models. This procedure is
applied first to microcircuit models that differ with regard to the spatial
range of synaptic connections and with regard to the scale of synaptic
efficacies in the circuit, and then to microcircuit models that differ with
regard to the level of background input currents and the level of noise
on the membrane potential of neurons. In this case the proposed method
allows us to quantify differences in the computational power and generalization capability of circuits in different dynamic regimes (UP- and
DOWN-states) that have been demonstrated through intracellular recordings in vivo.
1
Introduction
Rather than constructing particular microcircuit models that carry out particular computations, we pursue in this article a different strategy, which is based on the assumption that
the computational function of cortical microcircuits is not fully genetically encoded, but
rather emerges through various forms of plasticity (?learning?) in response to the actual
distribution of signals that the neural microcircuit receives from its environment. From this
perspective the question about the computational function of cortical microcircuits C turns
into the questions:
a) What functions (i.e. maps from circuit inputs to circuit outputs) can the circuit C
learn to compute.
b) How well can the circuit C generalize a specific learned computational function
to new inputs?
We propose in this article a conceptual framework and quantitative measures for the investigation of these two questions. In order to make this approach feasible, in spite of
numerous unknowns regarding synaptic plasticity and the distribution of electrical and biochemical signals impinging on a cortical microcircuit, we make in the present first step of
this approach the following simplifying assumptions:
1. Particular neurons ("readout neurons") learn via synaptic plasticity to extract specific
information encoded in the spiking activity of neurons in the circuit.
2. We assume that the cortical microcircuit itself is highly recurrent, but that the impact of
feedback that a readout neuron might send back into this circuit can be neglected.1
3. We assume that synaptic plasticity of readout neurons enables them to learn arbitrary
linear transformations. More precisely, we assume that the input to such readout neuron
can be approximated by a term \sum_{i=1}^{n-1} w_i x_i(t), where n − 1 is the number of presynaptic
neurons, xi (t) results from the output spike train of the ith presynaptic neuron by filtering
it according to the low-pass filtering property of the membrane of the readout neuron,2 and
wi is the efficacy of the synaptic connection. Thus wi xi (t) models the time course of the
contribution of previous spikes from the ith presynaptic neuron to the membrane potential
at the soma of this readout neuron. We will refer to the vector x(t) as the circuit state at
time t.
Under these unpleasant but apparently unavoidable simplifying assumptions we propose
new quantitative criteria based on rigorous mathematical principles for evaluating a neural
microcircuit C with regard to questions a) and b). We will compare in sections 4 and 5
the predictions of these quantitative measures with the actual computational performance
achieved by 132 different types of neural microcircuit models, for a fairly large number of
different computational tasks. All microcircuit models that we consider are based on biological data for generic cortical microcircuits (as described in section 3), but have different
settings of their parameters.
2
Measures for the kernel-quality and generalization capability of
neural microcircuits
One interesting measure for probing the computational power of a neural circuit is the pairwise separation property considered in [Maass et al., 2002]. This measure tells us to what
extent the current circuit state x(t) reflects details of the input stream that occurred some
time back in the past (see Fig. 1). Both circuit 2 and circuit 3 could be described as being
chaotic since state differences resulting from earlier input differences persist. The "edge-of-chaos" [Langton, 1990] lies somewhere between points 1 and 2 according to Fig. 1c). But
the best computational performance occurs between points 2 and 3 (see Fig. 2b)). Hence
the "edge-of-chaos" is not a reliable predictor of computational power for circuits of spiking neurons. In addition, most real-world computational tasks require that the circuit gives
a desired output not just for 2, but for a fairly large number m of significantly different
inputs. One could of course test whether a circuit C can separate each of the \binom{m}{2} pairs of
¹ This assumption is best justified if such readout neuron is located for example in another brain
area that receives massive input from many neurons in this microcircuit and only has diffuse backwards projection. But it is certainly problematic and should be addressed in future elaborations of the
present approach.
² One can be even more realistic and filter it also by a model for the short term dynamics of the
synapse into the readout neuron, but this turns out to make no difference for the analysis proposed in
this article.
[Figure 1: panel a) parameter map of Wscale versus λ; panel b) state separation versus t [s] for circuits 1–3; panel c) state separation versus λ.]
Figure 1: Pointwise separation property for different types of neural microcircuit models as specified
in section 3. Each circuit C was tested for two arrays u and v of 4 input spike trains at 20 Hz over
3 s that differed only during the first second. a) Euclidean differences between resulting circuit
states x_u(t) and x_v(t) for t = 3 s, averaged over 20 circuits C and 20 pairs u, v for each indicated
value of λ and Wscale (see section 3). b) Temporal evolution of ‖x_u(t) − x_v(t)‖ for 3 different
circuits with values of λ, Wscale according to the 3 points marked in panel a) (λ = 1.4, 2, 3 and
Wscale = 0.3, 0.7, 2 for circuit 1, 2, and 3 respectively). c) Pointwise separation along a straight line
between point 1 and point 2 of panel a).
such inputs. But even if the circuit can do this, we do not know whether a neural readout
from such circuit would be able to produce given target outputs for these m inputs.
Therefore we propose here the linear separation property as a more suitable quantitative
measure for evaluating the computational power of a neural microcircuit (or more precisely:
the kernel-quality of a circuit; see below). To evaluate the linear separation property of a
circuit C for m different inputs u_1, \ldots, u_m (which are in this article always functions of
time, i.e. input streams such as for example multiple spike trains) we compute the rank of
the n × m matrix M whose columns are the circuit states x_{u_i}(t_0) resulting at some fixed
time t_0 for the preceding input stream u_i. If this matrix has rank m, then it is guaranteed
that any given assignment of target outputs y_i ∈ R at time t_0 for the inputs u_i can be
implemented by this circuit C (in combination with a linear readout). In particular, each of
the 2^m possible binary classifications of these m inputs can then be carried out by a linear
readout from this fixed circuit C. Obviously such insight is much more informative than a
demonstration that some particular classification task can be carried out by such circuit C.
If the rank of this matrix M has a value r < m, then this value r can still be viewed as a
measure for the computational power of this circuit C, since r is the number of "degrees
of freedom" that a linear readout has in assigning target outputs y_i to these inputs u_i (in
a way which can be made mathematically precise with concepts of linear algebra). Note
that this rank-measure for the linear separation property of a circuit C may be viewed as an
empirical measure for its kernel-quality, i.e. for the complexity and diversity of nonlinear
operations carried out by C on its input stream in order to boost the classification power of
a subsequent linear decision-hyperplane (see [Vapnik, 1998]).
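Given simulated circuit states, this kernel-quality measure is just a matrix rank; a minimal sketch (our helper, with the numerical tolerance as the only free choice):

```python
import numpy as np

def kernel_quality(states, tol=1e-6):
    """Linear separation rank of the n x m matrix M whose column i is the
    circuit state x_{u_i}(t_0) for input u_i. Rank m guarantees that every
    assignment of target outputs y_i can be realized by a linear readout."""
    return np.linalg.matrix_rank(states, tol=tol)
```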
Obviously the preceding measure addresses only one component of the computational performance of a neural circuit C. Another component is its capability to generalize a learnt
computational function to new inputs. Mathematical criteria for generalization capability
are derived in [Vapnik, 1998] (see ch. 4 of [Cherkassky and Mulier, 1998] for a compact account of results relevant for our arguments). According to this mathematical theory one can
quantify the generalization capability of any learning device in terms of the VC-dimension
of the class H of hypotheses that are potentially used by that learning device.³
³ The VC-dimension (of a class H of maps H from some universe Suniv of inputs into {0, 1})
is defined as the size of the largest subset S ⊆ Suniv which can be shattered by H. One says that
S ⊆ Suniv is shattered by H if for every map f : S → {0, 1} there exists a map H in H such that
H(u) = f(u) for all u ∈ S (this means that every possible binary classification of the inputs u ∈ S
can be carried out by some hypothesis H in H).
More precisely: if VC-dimension(H) is substantially smaller than the size of the training set Strain,
one can prove that this learning device generalizes well, in the sense that the hypothesis (or
input-output map) produced by this learning device is likely to have for new examples an
error rate which is not much higher than its error rate on Strain , provided that the new
examples are drawn from the same distribution as the training examples (see equ. 4.22 in
[Cherkassky and Mulier, 1998]).
We apply this mathematical framework to the class HC of all maps from a set Suniv of
inputs u into {0, 1} which can be implemented by a circuit C. More precisely: HC consists
of all maps from Suniv into {0, 1} that a linear readout from circuit C with fixed internal
parameters (weights etc.) but arbitrary weights w ∈ R^n of the readout (that classifies the
circuit input u as belonging to class 1 if w · x_u(t_0) ≥ 0, and to class 0 if w · x_u(t_0) < 0)
could possibly implement.
Whereas it is very difficult to achieve tight theoretical bounds for the VC-dimension of even
much simpler neural circuits, see [Bartlett and Maass, 2003], one can efficiently estimate
the VC-dimension of the class HC that arises in our context for some finite ensemble Suniv
of inputs (that contains all examples used for training or testing) by using the following
mathematical result (which can be proved with the help of Radon's Theorem):
Theorem 2.1 Let r be the rank of the n × s matrix consisting of the s vectors x_u(t_0)
for all inputs u in Suniv (we assume that Suniv is finite and contains s inputs). Then
r ≤ VC-dimension(HC) ≤ r + 1.
We propose to use the rank r defined in Theorem 2.1 as an estimate of VC-dimension(HC ),
and hence as a measure that informs us about the generalization capability of a neural
microcircuit C. It is assumed here that the set Suniv contains many noisy variations
of the same input signal, since otherwise learning with a randomly drawn training set
Strain ⊆ Suniv has no chance to generalize to new noisy variations. Note that each family
of computational tasks induces a particular notion of what aspects of the input are viewed
as noise, and what input features are viewed as signals that carry information which is relevant for the target output for at least one of these computational tasks. For example for
computations on spike patterns some small jitter in the spike timing is viewed as noise. For
computations on firing rates even the sequence of interspike intervals and temporal relations between spikes that arrive from different input sources are viewed as noise, as long
as these input spike trains represent the same firing rates. Examples for both families of
computational tasks will be discussed in this article.
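The estimate suggested by Theorem 2.1 is computed in exactly the same way as the kernel-quality, only on the larger ensemble Suniv; the difference of the two ranks is what Fig. 3c below uses as a performance predictor. A sketch, again with our own helper names:

```python
import numpy as np

def vc_estimate(states_univ, tol=1e-6):
    """Rank r of the n x s state matrix for all s inputs in Suniv;
    by Theorem 2.1 the VC-dimension of H_C lies in [r, r + 1]."""
    return np.linalg.matrix_rank(states_univ, tol=tol)

def predictor(states_tasks, states_univ):
    """Kernel-quality minus VC-estimate (both are ranks; for Fig. 4d the two
    measures are additionally scaled into a common range before subtracting)."""
    r_kernel = np.linalg.matrix_rank(states_tasks)
    return r_kernel - vc_estimate(states_univ)
```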
3
Models for generic cortical microcircuits
We test the validity of the proposed measures by comparing their predictions with direct
evaluations of the computational performance for a large variety of models for generic cortical microcircuits consisting of 540 neurons. We used leaky-integrate-and-fire neurons4
and biologically quite realistic models for dynamic synapses.5 Neurons (20 % of which
were randomly chosen to be inhibitory) were located on the grid points of a 3D grid of
dimensions 6 × 6 × 15 with edges of unit length. The probability of a synaptic connection
⁴ Membrane voltage V_m modeled by τ_m dV_m/dt = −(V_m − V_resting) + R_m · (I_syn(t) + I_background + I_noise),
where τ_m = 30 ms is the membrane time constant, I_syn models synaptic inputs from other
neurons in the circuits, I_background models a constant unspecific background input and I_noise models
noise in the input.
⁵ Short term synaptic dynamics was modeled according to [Markram et al., 1998], with distributions of synaptic parameters U (initial release probability), D (time constant for depression), F (time
constant for facilitation) chosen to reflect empirical data (see [Maass et al., 2002] for details).
from neuron a to neuron b was proportional to exp(−D²(a, b)/λ²), where D(a, b) is the
Euclidean distance between a and b, and λ regulates the spatial scaling of synaptic connectivity. Synaptic efficacies w were chosen randomly from distributions that reflect biological
data (as in [Maass et al., 2002]), with a common scaling factor Wscale.
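The distance-dependent wiring rule can be sampled directly; the sketch below ignores the dependence of the base connection probability on the pre/post neuron types (present in the full model) and treats it as a single constant C:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_connections(pos, lam, C=1.0):
    """Boolean adjacency with P(a -> b) proportional to exp(-D^2(a,b)/lam^2);
    pos is an (N, 3) array of grid coordinates, lam the spatial scale."""
    D2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    P = np.clip(C * np.exp(-D2 / lam ** 2), 0.0, 1.0)
    np.fill_diagonal(P, 0.0)          # no self-connections
    return rng.random(P.shape) < P

# the 6 x 6 x 15 grid with unit edges used in the text
grid = np.array([[i, j, k] for i in range(6) for j in range(6) for k in range(15)])
A = sample_connections(grid, lam=2.0)
```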
[Figure 2: panel a) spike rasters over t [ms] (0–200); panel b) parameter map of Wscale versus λ.]
Figure 2: Performance of different types of neural microcircuit models for classification of spike
patterns. a) In the top row are two examples of the 80 spike patterns that were used (each consisting of
4 Poisson spike trains at 20 Hz over 200 ms), and in the bottom row are examples of noisy variations
(Gaussian jitter with SD 10 ms) of these spike patterns which were used as circuit inputs. b) Fraction
of examples (for 200 test examples) that were correctly classified by a linear readout (trained by
linear regression with 500 training examples). Results are shown for 90 different types of neural
microcircuits C with λ varying on the x-axis and Wscale on the y-axis (20 randomly drawn circuits
and 20 target classification functions randomly drawn from the set of 2^80 possible classification
functions were tested for each of the 90 different circuit types, and resulting correctness-rates were
averaged. The mean SD of the results is 0.028.). Points 1, 2, 3 defined as in Fig. 1.
Linear readouts from circuits with n − 1 neurons were assumed to compute a weighted
sum \sum_{i=1}^{n-1} w_i x_i(t) + w_0 (see section 1). In order to simplify notation we assume that the
vector x(t) contains an additional constant component x_0(t) = 1, so that one can write
w · x(t) instead of \sum_{i=1}^{n-1} w_i x_i(t) + w_0. In the case of classification tasks we assume that
the readout outputs 1 if w · x(t) ≥ 0, and 0 otherwise.
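Training such a readout by linear regression (as in Fig. 2) reduces to one least-squares solve; in this sketch we regress on ±1 targets so that the sign threshold at 0 is consistent, a detail the text leaves open:

```python
import numpy as np

def train_readout(X, labels):
    """X is (T, n): circuit states with the constant component x_0 = 1 appended;
    labels are 0/1 classes, mapped to -1/+1 as regression targets."""
    y = 2.0 * labels - 1.0
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def classify(X, w):
    return (X @ w >= 0).astype(int)   # readout outputs 1 iff w . x(t) >= 0
```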
4
Evaluating the influence of synaptic connectivity on computational
performance
Neural microcircuits were drawn from the distribution described in section 3 for 10 different values of λ (which scales the number and average distance of synaptically connected
neurons) and 9 different values of Wscale (which scales the efficacy of all synaptic connections). 20 microcircuit models C were drawn for each of these 90 different assignments
of values to λ and Wscale. For each circuit a linear readout was trained to perform one
(randomly chosen) out of 2^80 possible classification tasks on noisy variations u of 80 fixed
spike patterns as circuit inputs u. The target performance of any such circuit input was to
output at time t = 100 ms the class (0 or 1) of the spike pattern from which the preceding
circuit input had been generated (for some arbitrary partition of the 80 fixed spike patterns
into two classes). Each spike pattern u consisted of 4 Poisson spike trains over 200 ms. Performance results are shown in Fig. 2b for 90 different types of neural microcircuit models.
We now test the predictive quality of the two proposed measures for the computational
power of a microcircuit on spike patterns. One should keep in mind that the proposed
measures do not attempt to test the computational capability of a circuit for one particular computational task, but for any distribution on Suniv and for a very large (in general
infinitely large) family of computational tasks that only have in common a particular bias
regarding which aspects of the incoming spike trains may carry information that is relevant
for the target output of computations, and which aspects should be viewed as noise. Fig. 3a
explains why the lower left part of the parameter map in Fig. 2b is less suitable for any
[Figure 3: panels a)–c), parameter maps of Wscale versus λ with color scales.]
Figure 3: Values of the proposed measures for computations on spike patterns. a) Kernel-quality
for spike patterns of 90 different circuit types (average over 20 circuits, mean SD = 13; for each
circuit, the average over 5 different sets of spike patterns was used).⁶ b) Generalization capability for
spike patterns: estimated VC-dimension of HC (for a set Suniv of inputs u consisting of 500 jittered
versions of 4 spike patterns), for 90 different circuit types (average over 20 circuits, mean SD = 14;
for each circuit, the average over 5 different sets of spike patterns was used). c) Difference of both
measures (mean SD = 5.3). This should be compared with actual computational performance plotted
in Fig. 2b. Points 1, 2, 3 defined as in Fig. 1.
such computation, since there the kernel-quality of the circuits is too low. Fig. 3b explains
why the upper right part of the parameter map in Fig. 2b is less suitable, since a higher
VC-dimension (for a training set of fixed size) entails poorer generalization capability. We
are not aware of a theoretically founded way of combining both measures into a single
value that predicts overall computational performance. But if one just takes the difference
of both measures then the resulting number (see Fig. 3c) predicts quite well which types of
neural microcircuit models perform well for the particular computational tasks considered
in Fig. 2b.
5
Evaluating the computational power of neural microcircuit models
in UP- and DOWN-states
Data from numerous intracellular recordings suggest that neural circuits in vivo switch between two different dynamic regimes that are commonly referred to as UP- and DOWN
states. UP-states are characterized by a bombardment with synaptic inputs from recurrent
activity in the circuit, resulting in a membrane potential whose average value is significantly closer to the firing threshold, but also has larger variance. We have simulated these
different dynamic regimes by varying the background current Ibackground and the noise
current Inoise . Fig. 4a shows that one can simulate in this way different dynamic regimes
of the same circuit where the time course of the membrane potential qualitatively matches
data from intracellular recordings in UP- and DOWN-states (see e.g. [Shu et al., 2003]).
We have tested the computational performance of circuits in 42 different dynamic regimes
(for 7 values of Ibackground and 6 values of Inoise ) with 3 complex nonlinear computations
on firing rates of circuit inputs.⁷ Inputs u consisted of 4 Poisson spike trains with time-varying rates (drawn independently every 30 ms from the interval of 0 to 80 Hz for the first
two and the second two of 4 input spike trains, see middle row of Fig. 4a for a sample).
Let f1 (t) (f2 (t)) be the actual sum of rates normalized to the interval [0, 1] for the first
⁶ The rank of the matrix consisting of 500 circuit states x_u(t) for t = 200 ms was computed for
500 spike patterns over 200 ms as described in section 2, see Fig. 2a.
⁷ Computations on firing rates were chosen as benchmark tasks both because UP states were conjectured to enhance the performance for such tasks, and because we want to show that the proposed
measures are applicable to other types of computational tasks than those considered in section 4.
[Figure 4: panels a)–g); membrane potential V_m [mV] over t [ms] for an UP-state and a DOWN-state circuit with spike rasters (a), and parameter maps of I_noise versus I_background (b–g).]
Figure 4: Analysis of the computational power of simulated neural microcircuits in different dynamic regimes. a) Membrane potential (for a firing threshold of 15 mV) of two randomly selected
neurons from circuits in the two parameter regimes marked in panel b), as well as spike rasters for
the same two parameter regimes (with the actual circuit inputs shown between the two rows). b)
Estimates of the kernel-quality for input streams u with 3^4 different combinations of firing rates from
0, 20, 40 Hz in the 4 input spike trains (mean SD = 12). c) Estimate of the VC-dimension for a set
Suniv of inputs consisting of 200 different spike trains u that represent 2 different combinations of
firing rates (mean SD = 4.6). d) Difference of measures from panels b and c (after scaling each linearly into a common range [0,1]). e), f), g): Evaluation of the computational performance (correlation
coefficient; all for test data; mean SD is 0.06, 0.04, and 0.03 for panels e), f), and g) respectively) of
the same circuits in different dynamic regimes for computations involving multiplication and absolute value of differences of firing rates (see text). The theoretically predicted parameter regime with
good computational performance for any computations on firing rates (see panel d) agrees quite well
with the intersection of areas with good computational performance in panels e, f, g.
two (second two) input spike trains computed from the time interval [t − 30 ms, t]. The
computational tasks considered in Fig. 4 were to compute online (and in real-time) every
30 ms the function f_1(t) · f_2(t) (see panel e), to decide whether the value of the product
f_1(t) · f_2(t) lies in the interval [0.1, 0.3] or lies outside of this interval (see panel f), and to
decide whether the absolute value of the difference |f_1(t) − f_2(t)| is greater than 0.25 (see
panel g).
We wanted to test whether the proposed measures for computational power and generalization capability were able to make reasonable predictions for this completely different
parameter map, and for computations on firing rates instead of spike patterns. It turns
out that also in this case the kernel-quality (Fig. 4b) explains why circuits in the dynamic
regime corresponding to the left-hand side of the parameter map have inferior computational power for all three computations on firing rates (see Fig. 4 e,f,g). The VC-dimension
(Fig. 4c) explains the decline of computational performance in the right part of the parameter map. The difference of both measures (Fig. 4d) predicts quite well the dynamic
regime where high performance is achieved for all three computational tasks considered in
Fig. 4 e,f,g. Note that Fig. 4e has high performance in the upper right corner, in spite of a
very high VC-dimension. This could be explained by the inherent bias of linear readouts
to compute smooth functions on firing rates, which fits particularly well to this particular
target output.
If one estimates kernel-quality and VC-dimension for the same circuits, but for computations on sparse spike patterns (for an input ensemble Suniv similarly as in section 4), one
finds that circuits at the lower left corner of this parameter map (corresponding to DOWN-states) are predicted to have better computational performance for these computations on
sparse input. This agrees quite well with direct evaluations of computational performance
(not shown). Hence the proposed quantitative measures may provide a theoretical foundation for understanding the computational function of different states of neural activity.
6
Discussion
We have proposed a new method for understanding why one neural microcircuit C is computationally more powerful than another neural microcircuit C′. This method is in principle
applicable not just to circuit models, but also to neural microcircuits in vivo and in vitro.
Here it can be used to analyze (for example by optical imaging) for which family of computational tasks a particular microcircuit in a particular dynamic regime is well-suited. The
main assumption of the method is that (approximately) linear readouts from neural microcircuits have the task to produce the actual outputs of specific computations. We are not
aware of specific theoretically founded rules for choosing the sizes of the ensembles of
inputs for which the kernel-measure and the VC-dimension are to be estimated. Obviously
both have to be chosen sufficiently large so that they produce a significant gradient over the
parameter map under consideration (taking into account that their maximal possible value
is bounded by the circuit size). To achieve theoretical guarantees for the performance of
the proposed predictor of the generalization capability of a neural microcircuit one should
apply it to a relatively large ensemble Suniv of circuit inputs (and the dimension n of circuit states should be even larger). But the computer simulations of 132 types of neural
microcircuit models that were discussed in this article suggest that practically quite good
prediction can already be achieved for a much smaller ensemble of circuit inputs.
Acknowledgment: The work was partially supported by the Austrian Science Fund FWF,
project # P15386, and PASCAL project # IST2002-506778 of the European Union.
References
[Bartlett and Maass, 2003] Bartlett, P. L. and Maass, W. (2003). Vapnik-Chervonenkis dimension of
neural nets. In Arbib, M. A., editor, The Handbook of Brain Theory and Neural Networks, pages
1188–1192. MIT Press (Cambridge), 2nd edition.
[Cherkassky and Mulier, 1998] Cherkassky, V. and Mulier, F. (1998). Learning from Data. Wiley,
New York.
[Langton, 1990] Langton, C. G. (1990). Computation at the edge of chaos. Physica D, 42:12–37.
[Maass et al., 2002] Maass, W., Natschläger, T., and Markram, H. (2002). Real-time computing
without stable states: A new framework for neural computation based on perturbations. Neural
Computation, 14(11):2531–2560.
[Markram et al., 1998] Markram, H., Wang, Y., and Tsodyks, M. (1998). Differential signaling via
the same axon of neocortical pyramidal neurons. PNAS, 95:5323–5328.
[Shu et al., 2003] Shu, Y., Hasenstaub, A., and McCormick, D. A. (2003). Turning on and off recurrent balanced cortical activity. Nature, 103:288–293.
[Vapnik, 1998] Vapnik, V. N. (1998). Statistical Learning Theory. John Wiley (New York).
Jean-Philippe Vert
Centre de G?eostatistique
Ecole des Mines de Paris
35 rue Saint-Honor?e
77300 Fontainebleau, France
[email protected]
Yoshihiro Yamanishi
Bioinformatics Center
Institute for Chemical Research
Kyoto University
Uji, Kyoto 611-0011, Japan
[email protected]
Abstract
We formulate the problem of graph inference where part of the graph is
known as a supervised learning problem, and propose an algorithm to
solve it. The method involves the learning of a mapping of the vertices
to a Euclidean space where the graph is easy to infer, and can be formulated as an optimization problem in a reproducing kernel Hilbert space.
We report encouraging results on the problem of metabolic network reconstruction from genomic data.
1
Introduction
The problem of graph inference, or graph reconstruction, is to predict the presence or absence of edges between a set of points known to form the vertices of a graph, the prediction
being based on observations about the points. This problem has recently drawn a lot of attention in computational biology, where the reconstruction of various biological networks,
such as gene or molecular networks from genomic data, is a core prerequisite to the recent field of systems biology that aims at investigating the structures and properties of such
networks. As an example, the in silico reconstruction of protein interaction networks [1],
gene regulatory networks [2] or metabolic networks [3] from large-scale data generated by
high-throughput technologies, including genome sequencing or microarrays, is one of the
main challenges of current systems biology.
Various approaches have been proposed to solve the network inference problem. Bayesian
[2] or Petri networks [4] are popular frameworks to model the gene regulatory or the
metabolic network, and include methods to infer the network from data such as gene expression of metabolite concentrations [2]. In other cases, such as inferring protein interactions from gene sequences or gene expression, these models are less relevant and more
direct approaches involving the prediction of edges between ?similar? nodes have been
tested [5, 6].
These approaches are unsupervised, in the sense that they base their prediction on prior
knowledge about which edges should be present for a given set of points; this prior knowledge might for example be based on a model of conditional independence in the case of
Bayesian networks, or on the assumption that edges should connect similar points. The
actual situations we are confronted with, however, can often be expressed in a supervised
framework: besides the data about the vertices, part of the network is already known. This
is obviously the case with all network examples discussed above, and the real challenge
is to denoise the observed subgraph, if errors are assumed to be present, and to infer new
edges involving in particular nodes outside of the observed subgraph. In order to clarify
this point, let us take the example of an actual network inference problem that we treat
in the experiment below: the inference of the metabolic network from various genomic
data. The metabolic network is a graph of genes that involves only a subset of all the
genes of an organisms, known as enzymes. Enzymes can catalyze chemical reaction, and
an edge between two enzymes indicates that they can catalyze two successive reactions.
For most organisms, this graph is partially known, because many enzymes have already
been characterized. However many enzymes are also missing, and the problem is to detect
uncharacterized enzymes and place them in their correct location in the metabolic network.
Mathematically speaking, this means adding new edges involving new points, and eventually modifying edges in the known graph to remove mistakes from our current knowledge.
In this contribution we propose an algorithm for supervised graph inference, i.e., to infer
a graph from observations about the vertices and from the knowledge of part of the graph.
Several attempts have already been made to formalize the network inference problem as a
supervised machine learning problem [1, 7], but these attempts consist in predicting each
edge independently from each others using algorithms for supervised classification. We
propose below a radically different setting, where the known subgraph is used to extract a
new representation for the vertices, as points in a vector space, where the structure of the
graph is easier to infer than from the original observations. The edge inference engine in the
vector space is very simple (edges are inferred between nodes with similar representations),
and the learning step is limited to the construction of the mapping of the nodes onto the
vector space.
2
The supervised graph inference problem
Let us formally define the supervised graph inference problem. We suppose an undirected
simple graph G = (V, E) is given, where V = (v1 , . . . , vn ) ? V n is a set of vertices and
E ? V ? V is a set of edges. The problem is, given an additional set of vertices V 0 =
0
) ? V m , to infer a set of edges E 0 ? V 0 ? (V ? V 0 ) ? (V ? V 0 ) ? V 0 involving
(v10 , . . . , vm
the nodes in V 0 . In many situations of interest, in particular gene networks, the additional
nodes V 0 might be known in advance, but we do not make this assumption here to ensure
a level of generality as large as possible. For the applications we have in mind, the vertices
can be represented in V by a variety of data types, including but not limited to biological
sequences, molecular structures, expression profiles or metabolite concentrations. In order
to allow this diversity and take advantage of recent works on positive definite kernels on
general sets [8], we will assume that V is a set endowed
a positive definite kernel k,
Pwith
p
that is, a symmetric function k : V 2 ? R satisfying i,j=1 ai aj k(xi , xj ) ? 0 for any
p ? N, (a1 , . . . , an ) ? Rp and (x1 , . . . , xp ) ? V p .
3
From distance learning to graph inference
Suppose first that a graph must be inferred on p points (x1 , . . . , xp ) in the Euclidean space
Rd , without further information than ?similar points? should be connected. Then the simplest strategy to predict edges between the points is to put an edge between vertices that
are at a distance from each other smaller than a fixed threshold ?. More or less edges can
be inferred by varying the threshold. We call this strategy the ?direct? strategy. We now
propose to cast the supervised graph inference problem in a two step procedure:
? map the original points to a Euclidean space through a mapping f : V ? Rd ;
? apply the direct strategy to infer the network on the points {f (v), v ? V ? V 0 } .
While the second part of this procedure is fixed, the first part can be optimized by supervised learning of f using the known network. To do so we require the mapping f to map
adjacent vertices in the known graph to nearby positions in Rd , in order to ensure that the
known graph can be recovered to some extent by the direct strategy. Stated this way, the
problem of learning f appears similar to a problem of distance learning that has been raised
in the context of clustering [9], a important difference being that we need to define a new
representation of the points and therefore a new (Euclidean) distance not only for the points
in the training set, but also for points unknown during training.
Given a function f : V ? R, a possible criterion to assess whether connected (resp. disconnected) vertices are mapped onto similar (resp. dissimilar) points in R is the following:
P
P
2
2
(u,v)?E (f (u) ? f (v)) ?
(u,v)6?E (f (u) ? f (v))
.
(1)
R(f ) =
P
2
(u,v)?V 2 (f (u) ? f (v))
A small value of R(f ) ensures that connected vertices tend to be closer than disconnected
vertices (in a quadratic error sense). Observe that the numerator ensures an invariance of
R(f ) with respect to a scaling of f by a constant, which is consistent with the fact that the
direct strategy itself is invariant with respect to scaling of the points.
>
Let us denote by fV = (f (v1 ), . . . , f (vn )) ? Rn the values taken by f on the training
set, and by L the combinatorial Laplacian of the graph G, i.e., the n ? n matrix where Li,j
is equal to ?1P
(resp. 0) if i 6= j and vertices vi and vj are connected
P (resp. disconnected),
and Li,i = ? j6=i Li,j . If we restrict fV to have zero mean ( v?V f (v) = 0), then the
criterion (1) can be rewritten as follows:
R(f ) = 4
fV> LfV
? 2.
fV> fV
P
The obvious minimum of R(f ) under the constraint v?V f (v) = 0 is reached for any
function f such that fV is equal to the second largest eigenvector of L (the largest eigenvector of L begin the constant vector). However, this only defines the values of f on the
points V , but leaves indeterminacy on the values of f outside of V . Moreover, any arbitrary choice of f under a single constraint on fV is likely to be a mapping that overfits
the known graph at the expense of the capacity to infer the unknown edges. To overcome
both issues, we propose to regularize the criterion (1), by a smoothness functional on f ,
a classical approach in statistical learning [10, 11]. A convenient setting is to assume that
f belongs to the reproducing kernel Hilbert space (r.k.h.s.) H defined by the kernel k on
V, and to use the norm of f in the r.k.h.s. as a regularization operator. The regularized
criterion to be minimized becomes:
>
fV LfV + ?||f ||2H
min
,
(2)
f ?H0
fV> fV
P
wherePH0 = {f ? H : v?V f (v) = 0} is the subset of H orthogonal to the function
x 7? v?V k(x, v) in H and ? is a regularization parameter.
We note that [12] have recently and indenpendently proposed a similar formulation in the
context of clustering. The regularization parameter controls the trade-off between minimizing the original criterion (1) and ensuring that the solution has a small norm in the r.k.h.s.
When ? varies, the solution to (2) varies between to extremes:
? When ? is small, fV tends to the second largest eigenvector of the Laplacian L.
The regularization ensures that f is well defined as a function of V ? R, but f is
likely to overfit the known graph.
? When ? is large, the solution to (2) converges to the first kernel principal component (up to a scaling) [13], whatever the graph. Even though no supervised learning is performed in this case, one can observe that the resulting transformation,
when the first d kernel principal components are kept, is similar to the operation
performed in spectral clustering [14, 15] where points are mapped onto the first
few eigenvectors of a similarity matrix before being clustered.
Before showing how (2) is solved in practice, we must complete the picture by explaining
how the mapping f : V ? Rd is obtained. First note that the criterion in (2) is defined up
to a scaling of the functions, and the solution is therefore a direction in the r.k.h.s. In order
to extract a function, an additional
constraint must be set, such that imposing the norm
P
||f ||HV = 1, or imposing v?V f (v)2 = 1. The first solution correspond to an orthogonal
projection onto the direction selected in the r.k.h.s. (which would for example give the same
result as kernel PCA for large ?), while the second solution would provide a sphering of the
data. We tested both possibilities in practice and found very little difference, with however
slightly better results for the first solution (imposing ||f ||HV = 1). Second, the problem (2)
only defines a one-dimensional feature. In order to get a d-dimensional representation of
the vertices, we propose to iterate the minimization of (2) under orthogonality constraints
in the r.k.h.s., that is, we recursively define the i-th feature fi for i = 1, . . . , d by:
$$f_i = \arg\min_{f \in H_0,\ f \perp \{f_1, \ldots, f_{i-1}\}} \frac{f_V^\top L f_V + \lambda\, \|f\|_H^2}{f_V^\top f_V}. \qquad (3)$$
4 Implementation
Let $k_V$ be the kernel obtained by centering k on the set V, i.e.,

$$k_V(x, y) = k(x, y) - \frac{1}{n} \sum_{v \in V} k(x, v) - \frac{1}{n} \sum_{v \in V} k(y, v) + \frac{1}{n^2} \sum_{(v, v') \in V^2} k(v, v').$$
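The elementwise centering above is equivalent to the matrix identity $K_V = (I - U) K (I - U)$ used later in this section; below is a minimal sketch checking this on a toy Gaussian RBF Gram matrix (our own code, assuming nothing beyond numpy).

```python
# Sketch (ours): check that the elementwise centered kernel above agrees with
# the matrix form K_V = (I - U) K (I - U), with [U]_{ij} = 1/n, used below.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-0.5 * sq_dists)                      # Gaussian RBF Gram matrix

n = K.shape[0]
U = np.full((n, n), 1.0 / n)
K_V = (np.eye(n) - U) @ K @ (np.eye(n) - U)      # matrix form of the centering

# elementwise form: k_V(x, y) = k(x, y) - row mean - column mean + grand mean
K_V_direct = K - K.mean(axis=0, keepdims=True) - K.mean(axis=1, keepdims=True) + K.mean()
print(np.allclose(K_V, K_V_direct))              # True
```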
Let $H_V$ be the r.k.h.s. associated with $k_V$. Then it can easily be checked that $H_V = H_0$, where $H_0$ is defined in the previous section as the subset of H of the functions with zero mean on V. A simple extension of the representer theorem [10] in the r.k.h.s. $H_V$ shows that for any $i = 1, \ldots, d$, the solution to (3) has an expansion of the form:

$$f_i(x) = \sum_{j=1}^{n} \alpha_{i,j}\, k_V(x_j, x),$$

for some vector $\alpha_i = (\alpha_{i,1}, \ldots, \alpha_{i,n})^\top \in \mathbb{R}^n$. The corresponding vector $f_{i,V}$ can be written in terms of $\alpha_i$ by $f_{i,V} = K_V \alpha_i$, where $K_V$ is the Gram matrix of the kernel $k_V$ on the set V, i.e., $[K_V]_{i,j} = k_V(v_i, v_j)$ for $i, j = 1, \ldots, n$. $K_V$ is obtained from the Gram matrix K of the original kernel k by the classical formula $K_V = (I - U) K (I - U)$, I being the $n \times n$ identity matrix and U the constant $n \times n$ matrix $[U]_{i,j} = 1/n$ for $i, j = 1, \ldots, n$ [13]. Besides, the norm in $H_V$ is equal to $\|f_i\|_{H_V}^2 = \alpha_i^\top K_V \alpha_i$, and the orthogonality constraint between $f_i$ and $f_j$ in $H_V$ translates into $\alpha_i^\top K_V \alpha_j = 0$. As a
result, the problem (2) is equivalent to the following:

$$\alpha_i = \arg\min_{\alpha \in \mathbb{R}^n,\ \alpha^\top K_V \alpha_1 = \cdots = \alpha^\top K_V \alpha_{i-1} = 0} \frac{\alpha^\top K_V L K_V \alpha + \lambda\, \alpha^\top K_V \alpha}{\alpha^\top K_V^2 \alpha}. \qquad (4)$$
Setting the differential of (4) with respect to $\alpha$ to 0, we see that the first vector $\alpha_1$ must solve the following generalized eigenvector problem with the smallest (non-negative) generalized eigenvalue $\rho$:

$$(K_V L K_V + \lambda K_V)\, \alpha = \rho\, K_V^2\, \alpha. \qquad (5)$$

This shows that $\alpha_1$ must solve the following problem:

$$(L K_V + \lambda I)\, \alpha = \rho\, K_V\, \alpha, \qquad (6)$$
up to the addition of a vector $\epsilon$ satisfying $K_V \epsilon = 0$. Hence any solution of (5) differs from a solution of (6) by such an $\epsilon$, which however does not change the corresponding function $f \in H_V$. It is therefore enough to solve (6) in order to find the first vector $\alpha_1$. $K_V$ being positive semidefinite, the other generalized eigenvectors of (6) are conjugate with respect to $K_V$, so it can easily be checked that the d vectors $\alpha_1, \ldots, \alpha_d$ solving (4) are in fact the d smallest generalized eigenvectors of (6). In practice, for large n, the generalized eigenvector problem (6) can be solved by first performing an incomplete Cholesky decomposition of $K_V$; see e.g. [16].
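A minimal dense implementation sketch (our own simplification, with variable names that are ours): it solves (6) with a dense generalized eigensolver and drops the infinite eigenvalues caused by the singularity of $K_V$, whereas the paper performs an incomplete Cholesky decomposition for large n.

```python
# Sketch of the implementation (dense solvers for clarity; names ours).
# Solve (L K_V + lambda I) alpha = rho K_V alpha and keep the d smallest finite
# generalized eigenvalues; K_V is singular after centering, so the spurious
# directions show up as infinite eigenvalues and are discarded.
import numpy as np
from scipy.linalg import eig

def graph_features(K, L, lam, d):
    n = K.shape[0]
    U = np.full((n, n), 1.0 / n)
    K_V = (np.eye(n) - U) @ K @ (np.eye(n) - U)   # centered Gram matrix
    rho, alphas = eig(L @ K_V + lam * np.eye(n), K_V)
    keep = np.isfinite(rho)
    order = np.argsort(rho[keep].real)
    alphas = alphas[:, keep][:, order[:d]].real   # d smallest generalized eigenvectors
    return K_V, alphas

# toy usage: linear kernel on random points, Laplacian of a path graph on 6 vertices
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
K = X @ X.T
A_adj = np.diag(np.ones(5), 1)
A_adj = A_adj + A_adj.T
L_path = np.diag(A_adj.sum(axis=1)) - A_adj
K_V, alphas = graph_features(K, L_path, lam=1.0, d=2)
F = K_V @ alphas   # row i gives the 2-D coordinates (f_1(v_i), f_2(v_i)) of vertex v_i
```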
[Figure 1: three panels — (a) Train vs train, (b) Test vs (Train + test), (c) Test vs test — each plotting the ROC score as a function of the number of features (0–100) and the regularization parameter (log2, from -4 to 8).]
Figure 1: ROC score for different numbers of features and regularization parameters, in a
5-fold cross-validation experiment with the integrated kernel (the color scale is adjusted to
highlight the variations inside each figure, the performance increases from blue to red).
5 Experiment
We tested the supervised graph inference method described in the previous section on the problem of inferring a gene network of interest in computational biology: the metabolic gene network, with enzymes present in an organism as vertices, and edges between two enzymes when they can catalyze successive chemical reactions [17]. Focusing on the budding yeast S. cerevisiae, the graph corresponding to our current knowledge of the network was extracted from the KEGG database [18]. The resulting network contains 769 vertices and 7404 edges. In order to infer it, various independent data about the genes can be used. We focus on three sources of data likely to contain useful information to infer the graph: a set of 157 gene expression measurements obtained from DNA microarrays [19, 20], the phylogenetic profiles of the genes [21] as vectors of 145 bits indicating the presence or absence of each gene in 145 fully sequenced genomes, and their localization in the cell determined experimentally [22] as vectors of 23 bits indicating the presence of each gene in each of 23 compartments of the cell. In each case a Gaussian RBF kernel was used to represent the data as a kernel matrix. We denote these three datasets as "exp", "phy" and "loc" below. Additionally, we considered a fourth kernel obtained by summing the first three kernels. This is a simple approach to data integration that has proved to be useful in [23], for example. This integrated kernel is denoted "int" below.
We performed 5-fold cross-validation experiments as follows. For each random split of the set of genes into 80% (training set) and 20% (test set), the features are learned from the subgraph with genes from the training set as vertices. The edges involving genes in the test set are then predicted among all possible interactions involving the test set. The performance of the inference is estimated in terms of ROC curves (the percentage of actual edges correctly predicted, as a function of the number of predicted edges that are in fact absent), and in terms of the area under the ROC curve normalized between 0 and 1. Notice that the set of possible interactions to be predicted is made of interactions between two genes in the test set, on the one hand, and between one gene in the test set and one gene in the training set, on the other hand. As it might be more challenging to infer an edge in the former case, we compute two performances: first on the edges involving two nodes in the test set, and second on the edges involving at least one vertex in the test set.
The algorithm contains 2 free parameters: the number d of features to be kept, and the regularization parameter $\lambda$ that prevents overfitting the known graph. We varied $\lambda$ among the values $2^i$, for $i = -5, \ldots, 8$, and d between 1 and 100. Figure 1 displays the performance in terms of ROC index for the graph inference with the integrated kernel, for different values of d and $\lambda$. On the training set, it can be seen that increasing $\lambda$ consistently decreases the performance of the graph reconstruction, which is natural since smaller values of $\lambda$ are expected to overfit the training graph. These results nevertheless justify that the criterion (1), although not directly related to the ROC index of the graph reconstruction procedure, is a useful criterion to be optimized. As an example, for very small values of $\lambda$, the ROC index on the training set is above 96%. The results on the "test vs. test" and on the "test vs. (train + test)" experiments show that overfitting indeed occurs for small $\lambda$ values, and that there is an optimum, both in terms of d and $\lambda$. The slight difference between the performance landscapes in the experiments "test vs. test" and "test vs. (train + test)" shows that the first one is indeed more difficult than the latter one, where some form of overfitting is likely to occur (in the mapping of the vertices in the training set). In particular, the "test vs. test" setting seems to be more sensitive to the number of features selected than the other setting. The absolute values of the ROC scores when 20 features are selected, for varying $\lambda$, are shown in Figure 2. For all kernels tested, overfitting occurs at small $\lambda$ values, and an optimum exists (around $\lambda = 2$ to $10$). The performance in the setting "test vs. (train + test)" is consistently better than that in the setting "test vs. test". Finally, and more interestingly, the inference with the integrated kernel outperforms the inference with each individual kernel. This is further highlighted in Figure 3, where the ROC curves obtained for 20 features and $\lambda = 2$ are shown.
[Figure 2: four panels — (a) Expression kernel, (b) Localization kernel, (c) Phylogenetic kernel, (d) Integrated kernel — each plotting the ROC index against the regularization parameter (log2).]

Figure 2: ROC scores for different regularization parameters when 20 features are selected. Different pictures represent different kernels. In each picture, the dashed blue line, dash-dot red line and continuous black line correspond respectively to the ROC index on the training vs training set, the test vs (training + test) set, and the test vs test set.

[Figure 3: two panels — (a) Test vs. (train+test), (b) Test vs. test — each plotting True Positives (%) against False Positives (%) for the kernels Kexp, Kloc, Kphy, Kint, and Krand.]

Figure 3: ROC curves with 20 features selected and $\lambda = 2$ for the various kernels.

References

[1] R. Jansen, H. Yu, D. Greenbaum, Y. Kluger, N.J. Krogan, S. Chung, A. Emili, M. Snyder, J.F. Greenblatt, and M. Gerstein. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science, 302(5644), 2003.
[2] N. Friedman, M. Linial, I. Nachman, and D. Pe'er. Using Bayesian networks to analyze expression data. Journal of Computational Biology, 7:601-620, 2000.
[3] M. Kanehisa. Prediction of higher order functional networks from genomic data. Pharmacogenomics, 2(4):373-385, 2001.
[4] A. Doi, H. Matsuno, M. Nagasaki, and S. Miyano. Hybrid Petri net representation of gene regulatory network. In Proceedings of PSB 5, pages 341-352, 2000.
[5] E.M. Marcotte, M. Pellegrini, H.-L. Ng, D.W. Rice, T.O. Yeates, and D. Eisenberg. Detecting protein function and protein-protein interactions from genome sequences. Science, 285(5428):751-753, 1999.
[6] F. Pazos and A. Valencia. Similarity of phylogenetic trees as indicator of protein-protein interaction. Protein Engineering, 9(14):609-614, 2001.
[7] J. R. Bock and D. A. Gough. Predicting protein-protein interactions from primary structure. Bioinformatics, 17:455-460, 2001.
[8] B. Schölkopf, K. Tsuda, and J.-P. Vert. Kernel methods in computational biology. MIT Press, 2004.
[9] E.P. Xing, A.Y. Ng, M.I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In NIPS 15, pages 505-512. MIT Press, 2003.
[10] G. Wahba. Spline Models for Observational Data. Series in Applied Mathematics, Vol. 59, SIAM, Philadelphia, 1990.
[11] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219-269, 1995.
[12] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from examples. Technical Report TR-2004-06, University of Chicago, 2004.
[13] B. Schölkopf, A. J. Smola, and K.-R. Müller. Kernel principal component analysis. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 327-352. MIT Press, 1999.
[14] Y. Weiss. Segmentation using eigenvectors: a unifying view. In Proceedings of the IEEE International Conference on Computer Vision, pages 975-982. IEEE Computer Society, 1999.
[15] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS 14, pages 849-856. MIT Press, 2002.
[16] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1-48, 2002.
[17] J.-P. Vert and M. Kanehisa. Graph-driven feature extraction from microarray data using diffusion kernels and kernel CCA. In NIPS 15. MIT Press, 2003.
[18] M. Kanehisa, S. Goto, S. Kawashima, and A. Nakaya. The KEGG databases at GenomeNet. Nucleic Acids Research, 30:42-46, 2002.
[19] P. T. Spellman, G. Sherlock, M. Q. Zhang, K. Anders, M. B. Eisen, P. O. Brown, D. Botstein, and B. Futcher. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell, 9:3273-3297, 1998.
[20] M. Eisen, P. Spellman, P. O. Brown, and D. Botstein. Cluster analysis and display of genome-wide expression patterns. PNAS, 95:14863-14868, 1998.
[21] M. Pellegrini, E. M. Marcotte, M. J. Thompson, D. Eisenberg, and T. O. Yeates. Assigning protein functions by comparative genome analysis: protein phylogenetic profiles. PNAS, 96(8):4285-4288, 1999.
[22] W.K. Huh, J.V. Falvo, C. Gerke, A.S. Carroll, R.W. Howson, J.S. Weissman, and E.K. O'Shea. Global analysis of protein localization in budding yeast. Nature, 425:686-691, 2003.
[23] Y. Yamanishi, J.-P. Vert, A. Nakaya, and M. Kanehisa. Extraction of correlated gene clusters from multiple genomic data by generalized kernel canonical correlation analysis. Bioinformatics, 19:i323-i330, 2003.
An Application of Boosting to
Graph Classification
Taku Kudo,
Eisaku Maeda
NTT Communication Science Laboratories.
2-4 Hikaridai, Seika-cho, Soraku, Kyoto, Japan
{taku,maeda}@cslab.kecl.ntt.co.jp
Yuji Matsumoto
Nara Institute of Science and Technology.
8916-5 Takayama-cho, Ikoma, Nara, Japan
[email protected]
Abstract
This paper presents an application of Boosting for classifying labeled
graphs, general structures for modeling a number of real-world data, such
as chemical compounds, natural language texts, and bio sequences. The
proposal consists of i) decision stumps that use subgraph as features,
and ii) a Boosting algorithm in which subgraph-based decision stumps
are used as weak learners. We also discuss the relation between our algorithm and SVMs with convolution kernels. Two experiments using
natural language data and chemical compounds show that our method
achieves comparable or even better performance than SVMs with convolution kernels as well as improves the testing efficiency.
1 Introduction
Most machine learning (ML) algorithms assume that given instances are represented as
numerical vectors. However, much real-world data is not represented as numerical vectors,
but as more complicated structures, such as sequences, trees, or graphs. Examples include
biological sequences (e.g., DNA and RNA), chemical compounds, natural language texts,
and semi-structured data (e.g., XML and HTML documents).
Kernel methods, such as support vector machines (SVMs) [11], provide an elegant solution
to handling such structured data. In this approach, instances are implicitly mapped into a
high-dimensional space, where information about their similarities (inner-products) is only
used for constructing a hyperplane for classification. Recently, a number of kernels have
been proposed for such structured data, such as sequences [7], trees [2, 5], and graphs [6].
Most are based on the idea that a feature vector is implicitly composed of the counts of
substructures (e.g., subsequences, subtrees, subpaths, or subgraphs).
Although kernel methods show remarkable performance, their implicit definitions of feature space make it difficult to know what kind of features (substructures) are relevant or
which features are used in classifications. To use ML algorithms for data mining or as
knowledge discovery tools, they must output a list of relevant features (substructures). This
information may be useful not only for a detailed analysis of individual data but also for the human decision-making process.
Figure 1: Labeled connected graphs and the subgraph relation.

In this paper, we present a new machine learning algorithm for classifying labeled graphs that has the following characteristics: 1) It performs learning and classification using the structural information of a given graph. 2) It uses a set of all subgraphs (bag-of-subgraphs) as a feature set without any constraints, which is essentially the same idea as a convolution kernel [4]. 3) Even though the size of the candidate feature set becomes quite large, it automatically selects a compact and relevant feature set based on Boosting.
2 Classifier for Graphs
We first assume that an instance is represented as a labeled graph. The focused problem can be formalized as a general problem called the graph classification problem. The graph classification problem is to induce a mapping $f(x) : \mathcal{X} \to \{\pm 1\}$ from given training examples $T = \{\langle x_i, y_i \rangle\}_{i=1}^{L}$, where $x_i \in \mathcal{X}$ is a labeled graph and $y_i \in \{\pm 1\}$ is a class label associated with the training data. We here focus on the problem of binary classification. The important characteristic is that the input example $x_i$ is represented not as a numerical feature vector but as a labeled graph.
2.1 Preliminaries
In this paper we focus on undirected, labeled, and connected graphs, since we can easily
extend our algorithm to directed or unlabeled graphs with minor modifications. Let us introduce a labeled connected graph (or simply a labeled graph), its definitions and notations.
Definition 1 Labeled Connected Graph
A labeled graph is represented by a 4-tuple $G = (V, E, L, l)$, where V is a set of vertices, $E \subseteq V \times V$ is a set of edges, L is a set of labels, and $l : V \cup E \to L$ is a mapping that assigns labels to the vertices and the edges. A labeled connected graph is a labeled graph such that there is a path between any pair of vertices.
Definition 2 Subgraph
Let $G' = (V', E', L', l')$ and $G = (V, E, L, l)$ be labeled connected graphs. $G'$ matches G, or $G'$ is a subgraph of G ($G' \subseteq G$), if the following conditions are satisfied: (1) $V' \subseteq V$, (2) $E' \subseteq E$, (3) $L' \subseteq L$, and (4) $l' = l$. If $G'$ is a subgraph of G, then G is a supergraph of $G'$.

Figure 1 shows an example of a labeled graph and its subgraph and non-subgraph.
2.2 Decision Stumps
Decision stumps are simple classifiers in which the final decision is made by a single hypothesis or feature. BoosTexter [10] uses word-based decision stumps for text classification. To classify graphs, we define subgraph-based decision stumps as follows.

Definition 3 Decision Stumps for Graphs
Let t and x be labeled graphs and let y be a class label ($y \in \{\pm 1\}$). A decision stump classifier for graphs is given by

$$h_{\langle t, y \rangle}(x) \stackrel{\mathrm{def}}{=} \begin{cases} y & t \subseteq x \\ -y & \text{otherwise.} \end{cases}$$

The parameter for classification is the tuple $\langle t, y \rangle$, hereafter referred to as the rule of the decision stump. The decision stumps are trained to find a rule $\langle \hat{t}, \hat{y} \rangle$ that minimizes the error rate for the given training data $T = \{\langle x_i, y_i \rangle\}_{i=1}^{L}$:
$$\langle \hat{t}, \hat{y} \rangle = \arg\min_{t \in \mathcal{F},\, y \in \{\pm 1\}} \frac{1}{L} \sum_{i=1}^{L} I\big(y_i \neq h_{\langle t, y \rangle}(x_i)\big) = \arg\min_{t \in \mathcal{F},\, y \in \{\pm 1\}} \frac{1}{2L} \sum_{i=1}^{L} \big(1 - y_i\, h_{\langle t, y \rangle}(x_i)\big), \qquad (1)$$

where $\mathcal{F}$ is a set of candidate graphs or a feature set (i.e., $\mathcal{F} = \bigcup_{i=1}^{L} \{t \mid t \subseteq x_i\}$) and $I(\cdot)$ is the indicator function. The gain function for a rule $\langle t, y \rangle$ is defined as

$$\mathrm{gain}(\langle t, y \rangle) \stackrel{\mathrm{def}}{=} \sum_{i=1}^{L} y_i\, h_{\langle t, y \rangle}(x_i). \qquad (2)$$

Using the gain, the search problem (1) becomes equivalent to the problem $\langle \hat{t}, \hat{y} \rangle = \arg\max_{t \in \mathcal{F},\, y \in \{\pm 1\}} \mathrm{gain}(\langle t, y \rangle)$. In this paper, we use gain instead of error rate for clarity.
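A minimal sketch of this search (our own, illustrative only): we assume a precomputed 0/1 containment matrix M with $M_{ij} = 1$ iff candidate subgraph $t_j$ occurs in $x_i$ — producing M is the hard part, addressed by the enumeration of Section 3. With uniform weights d this computes the gain of Eq. (2); with non-uniform weights it is the reweighted gain of Eq. (3) introduced below.

```python
# Sketch (ours): pick the rule <t, y> maximizing the (weighted) gain, given a
# containment matrix M[i, j] = 1 iff subgraph t_j occurs in training graph x_i.
import numpy as np

def best_stump(M, y_labels, d_weights):
    # h_{<t,y>}(x) = y * (2 * I(t in x) - 1); gain = sum_i d_i y_i h(x_i)
    h_plus = 2 * M - 1                                   # stump outputs for y = +1
    gains_plus = (d_weights * y_labels) @ h_plus         # one gain per candidate t
    gains = np.concatenate([gains_plus, -gains_plus])    # y = -1 flips the sign
    j = int(np.argmax(gains))
    t_index, y = (j, +1) if j < M.shape[1] else (j - M.shape[1], -1)
    return t_index, y, float(gains[j])
```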
2.3 Applying Boosting
The decision stump classifiers are too inaccurate to be applied to real applications, since the final decision relies on the existence of a single graph. However, accuracies can be boosted by the Boosting algorithm [3, 10]. Boosting repeatedly calls a given weak learner and finally produces a hypothesis f, which is a linear combination of K hypotheses produced by the weak learners, i.e., $f(x) = \mathrm{sgn}\big(\sum_{k=1}^{K} \alpha_k\, h_{\langle t_k, y_k \rangle}(x)\big)$. A weak learner is built at each iteration k with a different distribution or set of weights $d^{(k)} = (d_1^{(k)}, \ldots, d_L^{(k)})$ on the training data, where $\sum_{i=1}^{L} d_i^{(k)} = 1$ and $d_i^{(k)} \geq 0$. The weights are calculated to concentrate more on hard examples than easy examples. To use decision stumps as the weak learner of Boosting, we redefine the gain function (2) as:

$$\mathrm{gain}(\langle t, y \rangle) \stackrel{\mathrm{def}}{=} \sum_{i=1}^{L} y_i\, d_i\, h_{\langle t, y \rangle}(x_i). \qquad (3)$$
In this paper, we use the AdaBoost algorithm, the original and the best known algorithm
among many variants of Boosting. However, it is trivial to fit our decision stumps to other
boosting algorithms, such as Arc-GV [1] and Boosting with soft margins [8].
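A compact AdaBoost sketch with these stumps as weak learners (our own simplification): it reuses best_stump from the previous sketch and an explicit containment matrix M, whereas the paper finds the best rule by branch-and-bound over the DFS Code Tree (Section 3).

```python
# Minimal AdaBoost sketch (ours) with subgraph stumps as weak learners.
import numpy as np

def adaboost_graphs(M, y_labels, K=10):
    L_n = M.shape[0]
    d = np.full(L_n, 1.0 / L_n)                    # initial weights d^(1)
    rules, alphas = [], []
    for _ in range(K):
        t, y, gain = best_stump(M, y_labels, d)
        h = y * (2 * M[:, t] - 1)                  # stump predictions on the training set
        eps = np.sum(d * (h != y_labels))          # weighted error of this stump
        eps = np.clip(eps, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - eps) / eps)
        d = d * np.exp(-alpha * y_labels * h)      # AdaBoost reweighting
        d /= d.sum()
        rules.append((t, y))
        alphas.append(alpha)
    return rules, alphas

def predict(M, rules, alphas):
    # f(x) = sgn( sum_k alpha_k h_{<t_k, y_k>}(x) )
    score = sum(a * r_y * (2 * M[:, r_t] - 1) for (r_t, r_y), a in zip(rules, alphas))
    return np.sign(score)
```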
3 Efficient Computation
In this section, we introduce an efficient and practical algorithm to find the optimal rule $\langle \hat{t}, \hat{y} \rangle$ from given training data. This problem is formally defined as follows.

Problem 1 Find Optimal Rule
Let $T = \{\langle x_1, y_1, d_1 \rangle, \ldots, \langle x_L, y_L, d_L \rangle\}$ be training data, where $x_i$ is a labeled graph, $y_i \in \{\pm 1\}$ is a class label associated with $x_i$, and $d_i$ ($\sum_{i=1}^{L} d_i = 1$, $d_i \geq 0$) is a normalized weight assigned to $x_i$. Given T, find the optimal rule $\langle \hat{t}, \hat{y} \rangle$ that maximizes the gain, i.e., $\langle \hat{t}, \hat{y} \rangle = \arg\max_{t \in \mathcal{F},\, y \in \{\pm 1\}} \sum_{i=1}^{L} d_i\, y_i\, h_{\langle t, y \rangle}(x_i)$, where $\mathcal{F} = \bigcup_{i=1}^{L} \{t \mid t \subseteq x_i\}$.
The most naive and exhaustive method, in which we first enumerate all subgraphs in $\mathcal{F}$ and then calculate the gains for all of them, is usually impractical, since the number of subgraphs is exponential in the graph size. We thus adopt an alternative strategy to avoid such exhaustive enumeration. The method to find the optimal rule is modeled as a variant of the branch-and-bound algorithm and is summarized by the following strategies: 1) Define a canonical search space in which the whole set of subgraphs can be enumerated. 2) Find the optimal rule by traversing this search space. 3) Prune the search space by proposing a criterion for the upper bound of the gain. We will describe these steps more precisely in the next subsections.

Figure 2: Example of a DFS Code Tree for a graph.
3.1 Efficient Enumeration of Graphs
Yan et al. proposed an efficient depth-first search algorithm to enumerate all subgraphs of a given graph [12]. The key idea of their algorithm is the DFS (depth-first search) code, a lexicographic order on sequences of edges. The search tree given by the DFS code is called a DFS Code Tree. Leaving the details to [12], the order of the DFS code is defined by the lexicographic order of labels as well as the topology of graphs. Figure 2 illustrates an example of a DFS Code Tree. Each node in this tree is represented by a 5-tuple $[i, j, v_i, e_{ij}, v_j]$, where $e_{ij}$, $v_i$ and $v_j$ are the labels of the $i$-$j$ edge, the $i$-th vertex, and the $j$-th vertex, respectively. By performing a pre-order search of the DFS Code Tree, we can obtain all the subgraphs of a graph in the order of their DFS codes. However, one cannot avoid isomorphic enumerations even with a pre-order traverse, since one graph can have several DFS codes in a DFS Code Tree. So, the canonical DFS code (minimum DFS code) is defined as the first code encountered in the pre-order search of the DFS Code Tree. Yan et al. show that two graphs G and $G'$ are isomorphic if and only if the minimum DFS codes for the two graphs, $\min(G)$ and $\min(G')$, are the same. We can thus ignore non-minimum DFS codes in subgraph enumerations. In other words, in the depth-first traverse, we can prune a node with DFS code c if c is not minimum: the isomorphic graph represented by the minimum code has already been enumerated in the depth-first traverse. For example, in Figure 2, if $G_1$ is identical to $G_0$, then $G_0$ has been discovered before the node for $G_1$ is reached. This property allows us to avoid an explicit isomorphism test of the two graphs.
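A toy sketch of the ordering (ours; it ignores the edge-growth constraints of the real DFS code and only illustrates the lexicographic comparison): a code is a sequence of 5-tuples, and tuple sequences already compare lexicographically in Python, which is what selecting the minimum (canonical) code requires.

```python
# Toy sketch (ours): DFS codes as sequences of 5-tuples (i, j, v_i, e_ij, v_j).
# Python compares lists of tuples lexicographically, so picking the minimum
# (canonical) code among several codes of the same graph is a single min().
code_a = [(0, 1, 'A', '-', 'B'), (1, 2, 'B', '-', 'C')]
code_b = [(0, 1, 'A', '-', 'C'), (1, 2, 'C', '-', 'B')]  # another DFS of the same graph

canonical = min([code_a, code_b])   # lexicographically smallest = minimum DFS code
print(canonical is code_a)          # True: code_a comes first in lexicographic order
```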
3.2 Upper bound of the gain
The DFS Code Tree defines a canonical search space in which one can enumerate all subgraphs from a given set of graphs. We consider an upper bound of the gain that allows pruning of subspaces of this canonical search space. The following lemma gives a convenient method of computing a tight upper bound on $\mathrm{gain}(\langle t', y \rangle)$ for any supergraph $t'$ of t.
Lemma 1 Upper bound of the gain: $\mu(t)$
For any $t' \supseteq t$ and $y \in \{\pm 1\}$, the gain of $\langle t', y \rangle$ is bounded by $\mu(t)$ (i.e., $\mathrm{gain}(\langle t', y \rangle) \leq \mu(t)$), where $\mu(t)$ is given by

$$\mu(t) \stackrel{\mathrm{def}}{=} \max\Bigg( 2 \sum_{\{i \mid y_i = +1,\ t \subseteq x_i\}} d_i - \sum_{i=1}^{L} y_i\, d_i,\ \ 2 \sum_{\{i \mid y_i = -1,\ t \subseteq x_i\}} d_i + \sum_{i=1}^{L} y_i\, d_i \Bigg).$$

Proof 1
$$\mathrm{gain}(\langle t', y \rangle) = \sum_{i=1}^{L} d_i\, y_i\, h_{\langle t', y \rangle}(x_i) = \sum_{i=1}^{L} d_i\, y_i \cdot y \cdot \big(2 I(t' \subseteq x_i) - 1\big),$$

where $I(\cdot)$ is the indicator function. If we focus on the case $y = +1$, then

$$\mathrm{gain}(\langle t', +1 \rangle) = 2 \sum_{\{i \mid t' \subseteq x_i\}} y_i\, d_i - \sum_{i=1}^{L} y_i\, d_i \leq 2 \sum_{\{i \mid y_i = +1,\ t' \subseteq x_i\}} d_i - \sum_{i=1}^{L} y_i\, d_i \leq 2 \sum_{\{i \mid y_i = +1,\ t \subseteq x_i\}} d_i - \sum_{i=1}^{L} y_i\, d_i,$$

since $\{i \mid y_i = +1,\ t' \subseteq x_i\} \subseteq \{i \mid y_i = +1,\ t \subseteq x_i\}$ for any $t' \supseteq t$. Similarly,

$$\mathrm{gain}(\langle t', -1 \rangle) \leq 2 \sum_{\{i \mid y_i = -1,\ t \subseteq x_i\}} d_i + \sum_{i=1}^{L} y_i\, d_i.$$

Thus, for any $t' \supseteq t$ and $y \in \{\pm 1\}$, $\mathrm{gain}(\langle t', y \rangle) \leq \mu(t)$. □
We can efficiently prune the DFS Code Tree using the upper bound of the gain $\mu(t)$. During the pre-order traverse of a DFS Code Tree, we always maintain the temporary suboptimal gain $\tau$ among all the gains calculated previously. If $\mu(t) < \tau$, the gain of any supergraph $t' \supseteq t$ is no greater than $\tau$, and therefore we can safely prune the search space spanned from the subgraph t. If $\mu(t) \geq \tau$, then we cannot prune this space, since a supergraph $t' \supseteq t$ might exist such that $\mathrm{gain}(t') \geq \tau$.
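A small sketch of the pruning test (our own code; contains_t stands for the 0/1 occurrence vector of t over the training graphs):

```python
# Sketch (ours) of Lemma 1: mu(t) upper-bounds the gain of every supergraph of t,
# so a branch rooted at t can be discarded whenever mu(t) < tau.
import numpy as np

def mu_bound(contains_t, y_labels, d_weights):
    # contains_t[i] = 1 iff subgraph t occurs in x_i
    s = np.sum(y_labels * d_weights)
    pos = 2 * np.sum(d_weights[(y_labels == +1) & (contains_t == 1)]) - s
    neg = 2 * np.sum(d_weights[(y_labels == -1) & (contains_t == 1)]) + s
    return max(pos, neg)

def can_prune(contains_t, y_labels, d_weights, tau):
    return mu_bound(contains_t, y_labels, d_weights) < tau
```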
3.3 Efficient Computation in Boosting
At each Boosting iteration, the suboptimal value $\tau$ is reset to 0. However, if we can calculate a tighter upper bound in advance, the search space can be pruned more effectively. For this purpose, a cache is used to maintain all rules found in the previous iterations. The suboptimal value $\tau$ is calculated by selecting one rule from the cache that maximizes the gain under the current distribution. This idea is based on our observation that a rule in the cache tends to be reused as the number of Boosting iterations increases. Furthermore, we also maintain the search space built by a DFS Code Tree as long as memory allows. This cache reduces duplicated constructions of a DFS Code Tree at each Boosting iteration.
4 Connection to Convolution Kernel
Recent studies [1, 9, 8] have shown that both Boosting and SVMs [11] work according to similar strategies: constructing an optimal hypothesis that maximizes the smallest margin between positive and negative examples. The difference between the two algorithms is the metric of the margin; the margin of Boosting is measured in the $\ell_1$-norm, while that of SVMs is measured in the $\ell_2$-norm. We describe how maximum margin properties are translated in the two algorithms.

AdaBoost and Arc-GV asymptotically solve the following linear program [1, 9, 8]:

$$\max_{w \in \mathbb{R}^J,\, \rho \in \mathbb{R}_+} \rho \quad \text{s.t.} \quad y_i \sum_{j=1}^{J} w_j h_j(x_i) \geq \rho, \quad \|w\|_1 = 1, \qquad (4)$$

where J is the number of hypotheses. Note that in the case of decision stumps for graphs, $J = |\{\pm 1\} \times \mathcal{F}| = 2|\mathcal{F}|$.

SVMs, on the other hand, solve the following quadratic optimization problem¹ [11]:

$$\max_{w \in \mathbb{R}^J,\, \rho \in \mathbb{R}_+} \rho \quad \text{s.t.} \quad y_i \cdot (w \cdot \phi(x_i)) \geq \rho, \quad \|w\|_2 = 1. \qquad (5)$$

¹ For simplicity, we omit the bias term (b) and the extension to soft margins.
The function $\phi(x)$ maps the original input example x into a J-dimensional feature vector (i.e., $\phi(x) \in \mathbb{R}^J$). The $\ell_2$-norm margin gives the separating hyperplane expressed by dot-products in feature space. The feature space in SVMs is thus expressed implicitly by using a Mercer kernel function, which is a generalized dot-product between two objects (i.e., $K(x_1, x_2) = \phi(x_1) \cdot \phi(x_2)$).

The best known kernel for modeling structured data is the convolution kernel [4] (e.g., the string kernel [7] and tree kernel [2, 5]), which argues that a feature vector is implicitly composed of the counts of substructures.² The implicit mapping defined by the convolution kernel is given as $\phi(x) = \big(\#(t_1 \subseteq x), \ldots, \#(t_{|\mathcal{F}|} \subseteq x)\big)$, where $t_j \in \mathcal{F}$ and $\#(u)$ is the cardinality of u. Noticing that a decision stump can be expressed as $h_{\langle t, y \rangle}(x) = y \cdot (2 I(t \subseteq x) - 1)$, we see that the constraints or feature space of Boosting with substructure-based decision stumps are essentially the same as those of SVMs with the convolution kernel.³ The critical difference is the definition of the margin: Boosting uses the $\ell_1$-norm, and SVMs use the $\ell_2$-norm. The difference between them can be explained by sparseness.
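A tiny sketch of the correspondence (ours, illustrative): the convolution-kernel map uses substructure counts, while the stump features are signed occurrence indicators; on sparse data they carry nearly the same information (cf. footnote 3).

```python
# Sketch (ours): counts[i, j] = #(t_j in x_i) for three candidate substructures.
import numpy as np

counts = np.array([[2, 0, 1],
                   [0, 1, 0]])
phi = counts                                         # convolution-kernel feature map
stump_features = 2 * (counts > 0).astype(int) - 1    # +1 if t occurs in x, else -1
print(phi)
print(stump_features)
```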
It is well known that the solution or separating hyperplane of SVMs is expressed as a linear combination of the training examples using coefficients $\alpha$ (i.e., $w = \sum_{i=1}^{L} \alpha_i \phi(x_i)$) [11]. Maximizing the $\ell_2$-norm margin gives a sparse solution in the example space (i.e., most of the $\alpha_i$ become 0). Examples having non-zero coefficients are called support vectors and form the final solution. Boosting, in contrast, performs the computation explicitly in feature space. The concept behind Boosting is that only a few hypotheses are needed to express the final solution. The $\ell_1$-norm margin realizes such a property [8]. Boosting thus finds a sparse solution in the feature space. The accuracies of these two methods depend on the given training data. However, we argue that Boosting has the following practical advantages. First, sparse hypotheses allow the construction of an efficient classification algorithm. The complexity of SVMs with the tree kernel is $O(l\,|n_1|\,|n_2|)$, where $n_1$ and $n_2$ are trees and l is the number of support vectors, which is too heavy to be applied to real applications. Boosting, in contrast, performs faster since the complexity depends only on a small number of decision stumps. Second, sparse hypotheses are useful in practice as they provide "transparent" models with which we can analyze how the model performs or what kind of features are useful. It is difficult to give such an analysis with kernel methods since they define the feature space implicitly.
5 Experiments and Discussion
To evaluate our algorithm, we carried out two experiments using two real-world data sets.

(1) Cellphone review classification (REV): The goal of this task is to classify reviews of cellphones as positive or negative. 5,741 sentences were collected from a Web-BBS discussion about cellphones in which users were directed to submit positive reviews separately from negative reviews. Each sentence is represented as a word-based dependency tree using the Japanese dependency parser CaboCha.⁴

(2) Toxicology prediction of chemical compounds (PTC): The task is to classify chemical compounds by carcinogenicity. We used the PTC data set⁵ consisting of 417 compounds with 4 types of test animals: male mouse (MM), female
² Strictly speaking, the graph kernel [6] is not a convolution kernel because it is not based on the counts of subgraphs, but on random walks in a graph.
³ The difference between decision stumps and the convolution kernels is that the former use a binary feature denoting the existence (or absence) of each substructure, whereas the latter use the cardinality of each substructure. However, this makes little difference, since a given graph is often sparse and the cardinality of substructures is well approximated by their existence.
⁴ http://chasen.naist.jp/~taku/software/cabocha/
⁵ http://www.predictive-toxicology.org/ptc/
Table 1: Classification F-scores of the REV and PTC tasks

                                            REV    PTC-MM  PTC-FM  PTC-MR  PTC-FR
Boosting   BOL-based Decision Stumps        76.6   47.0    52.9    42.7    26.9
Boosting   Subgraph-based Decision Stumps   79.0   48.9    52.5    55.1    48.5
SVMs       BOL Kernel                       77.2   40.9    39.9    43.9    21.8
SVMs       Tree/Graph Kernel                79.4   42.3    34.1    53.2    25.9
mouse (FM), male rat (MR) and female rat (FR). Each compound is assigned one of the following labels: {EE, IS, E, CE, SE, P, NE, N}. We here assume that CE, SE, and P are "positive" and that NE and N are "negative", which is exactly the same setting as [6]. We thus
have four binary classifiers (MM/FM/MR/FR) in this data set.

We compared the performance of our Boosting algorithm and support vector machines with the tree kernel [2, 5] (for REV) and the graph kernel [6] (for PTC) according to their F-scores in 5-fold cross-validation.
Table 1 summarizes the best results on the REV and PTC tasks, varying the hyperparameters of Boosting and SVMs (e.g., the maximum number of Boosting iterations, the soft margin parameter of SVMs, and the termination probability of random walks in the graph kernel [6]). We also show the results with bag-of-label (BOL) features as a baseline. In most tasks and categories, ML algorithms with structural features outperform the baseline systems (BOL). These results support our first intuition that structural features are important for the classification of structured data, such as natural language texts and chemical compounds.

Comparing our Boosting algorithm with SVMs using the tree kernel, no significant difference can be found on the REV data set. However, in the PTC task, our method outperforms SVMs using the graph kernel on the categories MM, FM, and FR at a statistically significant level. Furthermore, the number of active features (subgraphs) used in Boosting is much smaller than that of SVMs. With our method, about 1800 and 50 features (subgraphs) are used in the REV and PTC tasks respectively, while the potential number of features is quite large. Even given all subgraphs as feature candidates, Boosting selects a small and highly relevant subset of features.
Figure 3 shows examples of the extracted support features (subgraphs) in the REV and PTC tasks, respectively. In the REV task, features reflecting the domain knowledge (cellphone reviews) are extracted: 1) "want to use" → positive, 2) "hard to use" → negative, 3) "recharging time is short" → positive, 4) "recharging time is long" → negative. These features are interesting because we cannot determine the correct label (positive/negative) using only such bag-of-label features as "charging", "short", or "long". In the PTC task, similar structures show different behavior. For instance, trihalomethanes (TTHMs), well-known carcinogenic substances (e.g., chloroform, bromodichloromethane, and chlorodibromomethane), contain the common substructure H-C-Cl (Fig. 3(a)). However, TTHMs do not contain the similar but different structure H-C(C)-Cl (Fig. 3(b)). Such structural information is useful for analyzing how the system classifies the input data into a category and what kind of features are used in the classification. We cannot carry out such analysis with kernel methods, since they define their feature space implicitly.

The reason why the graph kernel shows poor performance on the PTC data set is that it cannot identify subtle differences between two graphs, because it is based on random walks in a graph. For example, the kernel dot-product between the similar but different structures of Fig. 3(c) and Fig. 3(d) becomes quite large, although they show different behavior. To classify chemical compounds by their functions, the system must be capable of capturing subtle differences among the given graphs.
Figure 3: Support features and their weights.

The testing speed of our Boosting algorithm is also much faster than that of SVMs with tree/graph kernels. In the REV task, the speeds of Boosting and SVMs are 0.135 sec./1,149 instances and 57.91 sec./1,149 instances, respectively.⁶ Our method is significantly faster than SVMs with tree/graph kernels without a discernible loss of accuracy.
6 Conclusions
In this paper, we focused on an algorithm for the classification of labeled graphs. The proposal consists of i) decision stumps that use subgraphs as features, and ii) a Boosting algorithm in which subgraph-based decision stumps are applied as the weak learners. Two experiments were employed to confirm the importance of subgraph features. In addition, we experimentally showed that our Boosting algorithm is accurate and efficient for classification tasks involving discrete structural features.
References

[1] Leo Breiman. Prediction games and arcing algorithms. Neural Computation, 11(7):1493-1518, 1999.
[2] Michael Collins and Nigel Duffy. Convolution kernels for natural language. In NIPS 14, Vol. 1, pages 625-632, 2001.
[3] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1996.
[4] David Haussler. Convolution kernels on discrete structures. Technical report, UC Santa Cruz (UCSC-CRL-99-10), 1999.
[5] Hisashi Kashima and Teruo Koyanagi. SVM kernels for semi-structured data. In Proc. of ICML, pages 291-298, 2002.
[6] Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized kernels between labeled graphs. In Proc. of ICML, pages 321-328, 2003.
[7] Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. Text classification using string kernels. Journal of Machine Learning Research, 2, 2002.
[8] Gunnar Rätsch, Takashi Onoda, and Klaus-Robert Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287-320, 2001.
[9] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. In Proc. of ICML, pages 322-330, 1997.
[10] Robert E. Schapire and Yoram Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135-168, 2000.
[11] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
[12] Xifeng Yan and Jiawei Han. gSpan: Graph-based substructure pattern mining. In Proc. of ICDM, pages 721-724, 2002.
⁶ We tested the performance on Linux with dual XEON 2.4 GHz processors.
On the Distribution of the Number of Local
Minima of a Random Function on a Graph
Pierre Baldi
JPL, Caltech
Pasadena, CA 91109

Yosef Rinott
UCSD
La Jolla, CA 92093

Charles Stein
Stanford University
Stanford, CA 94305

1 INTRODUCTION
Minimization of energy or error functions has proved to be a useful principle in the design and analysis of neural networks and neural algorithms. A brief list of examples includes: the back-propagation algorithm, the use of optimization methods in computational vision, the application of analog networks to the approximate solution of NP-complete problems, and the Hopfield model of associative memory. In the Hopfield model of associative memory, for instance, a quadratic Hamiltonian of the form

$$F(x) = \sum_{i < j} w_{ij}\, x_i x_j, \qquad x_i = \pm 1, \qquad (1)$$

is constructed to tailor a particular "landscape" on the n-dimensional hypercube $H_n = \{-1, +1\}^n$ and store memories at a particular subset of the local minima of F on $H_n$. The synaptic weights $w_{ij}$ are usually constructed incrementally, using a form of Hebb's rule applied to the patterns to be stored. These patterns are often chosen at random. As the number of stored memories grows to and beyond saturation, the energy function F becomes essentially random. In addition, in a general context of combinatorial optimization, every problem in NP can be (polynomially) reduced to the problem of minimizing a certain quadratic form over $H_n$.
These two types of considerations, associative memory and combinatorial optimization, motivate the study of the number and distribution of local minima of a random function F defined over the hypercube, or more generally, any graph G. Of
course, different notions of randomness can be introduced. In the case where F is a
quadratic form as in (1), we could take the coefficients $w_{ij}$ to be independent identically distributed Gaussian random variables, which yields, in fact, the Sherrington-Kirkpatrick long-range spin glass model of statistical physics. For this model, the expectation of the number of local minima is well known, but no rigorous results have been obtained for its distribution (even the variance is not known precisely). A simpler model of randomness can then be introduced, where the values F(x) of the random function at each vertex are assigned randomly and independently from a common distribution: this is in fact the random energy model of Derrida (1981).
2 THE MAIN RESULT
In Baldi, Rinott and Stein (1989) the following general result on random energy models is proven.

Let $G = (V, E)$ be a regular d-graph, i.e., a graph where every vertex has the same number d of neighbors. Let F be a random function on V whose values are independently distributed with a common continuous distribution. Let W be the number of local minima of F, i.e., the number of vertices x satisfying $F(x) < F(y)$ for any neighbor y of x (i.e., $(x, y) \in E$). Let $EW = \lambda$ and $\mathrm{Var}\, W = \sigma^2$. Then

$$EW = \frac{|V|}{d+1} \qquad (2)$$

and for any positive real w:

$$\left| P(W \leq w) - \Phi\!\left(\frac{w - \lambda}{\sigma}\right) \right| \leq \frac{C}{\sqrt{\sigma}}, \qquad (3)$$

where $\Phi$ is the standard normal distribution and C is an absolute constant.
Remarks:
(a) The proof of (3) ((2) is obvious) is based on a method developed in Stein (1986).
(b) The bound given in the theorem is not asymptotic but holds also for small graphs.
(c) If $|V| \to \infty$, the theorem states that if $\sigma \to \infty$ then the distribution of the number of local minima approaches a normal distribution, and (3) also gives a bound of $O(\sigma^{-1/2})$ on the rate of convergence.
(d) The function F simply induces a ranking (or a random permutation) of the vertices of G.
(e) The bound in (3) may not be optimal. We suspect that the optimal rate should scale like $\sigma^{-1}$ rather than $\sigma^{-1/2}$.
3 EXAMPLES OF APPLICATIONS

(1) Consider an $n \times n$ square lattice (see Fig. 1) with periodic boundary conditions. Here, $|V_n| = n^2$ and $d = 4$. The expected number of local minima is

$$EW_n = \frac{n^2}{5} \qquad (4)$$

and a simple calculation shows that

$$\mathrm{Var}\, W_n = \frac{13 n^2}{225}. \qquad (5)$$

Therefore $W_n$ is asymptotically normal and the rate of convergence is bounded by $O(n^{-1/2})$.
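A Monte Carlo sketch (ours) of (4): local minima of a random ranking on the 10 x 10 periodic lattice, whose expected count should be close to $n^2/5 = 20$.

```python
# Sketch (ours): count local minima of random rankings on an n x n periodic
# lattice (d = 4) and compare the empirical mean with n^2 / 5.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 2000
counts = []
for _ in range(trials):
    F = rng.permutation(n * n).reshape(n, n)    # random ranking of the vertices
    neighbors_min = np.minimum.reduce([np.roll(F, 1, 0), np.roll(F, -1, 0),
                                       np.roll(F, 1, 1), np.roll(F, -1, 1)])
    counts.append(np.sum(F < neighbors_min))    # vertices below all 4 neighbors
print(np.mean(counts), n * n / 5)               # both close to 20
```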
(2) Consider an $n \times n$ square lattice where, this time, the neighbors of a vertex v are all the points in the same row or column as v (see Fig. 2). This example arises in game theory, where the rows (resp. columns) correspond to different possible strategies of one of two players. The energy value can be interpreted as the cost of the combined choice of two strategies. Here $|V_n| = n^2$ and $d = 2n - 2$. The expected number of local minima (the Nash equilibrium points of game theory) $W_n$ is

$$EW_n = \frac{n^2}{2n - 1} \approx \frac{n}{2} \qquad (6)$$

and

$$\mathrm{Var}\, W_n = \frac{n^2 (n - 1)}{2(2n - 1)^2} \approx \frac{n}{8}. \qquad (7)$$

Therefore $W_n$ is asymptotically normal and the rate of convergence is bounded by $O(n^{-1/4})$.
(3) Consider the n-dimensional hypercube $H_n = (V_n, E_n)$ (see Fig. 3). Then $|V_n| = 2^n$ and $d = n$. The expected number of local minima $W_n$ is:

$$EW_n = \frac{2^n}{n+1} = \lambda_n \qquad (8)$$

and

$$\mathrm{Var}\, W_n = \frac{2^{n-1}(n-1)}{(n+1)^2} = \sigma_n^2. \qquad (9)$$

Therefore $W_n$ is asymptotically normal, and in fact:

$$\left| P(W_n < w) - \Phi\!\left(\frac{w - \lambda_n}{\sigma_n}\right) \right| \leq \frac{C \sqrt{n+1}}{(n-1)^{1/4}\, 2^{(n-1)/4}} = O\!\left(\sqrt[4]{n / 2^n}\right). \qquad (10)$$

In contrast, if the edges of $H_n$ are randomly and independently oriented with probability .5, then the distribution of the number of vertices having all their adjacent edges oriented inward is asymptotically Poisson with mean 1.
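The same kind of Monte Carlo sketch (ours) for (8) on the hypercube $H_8$, where a vertex is a local minimum when its rank is below those of its n bit-flip neighbors; the expectation should be close to $2^8/9 \approx 28.4$.

```python
# Sketch (ours): count local minima of random rankings on the hypercube H_n,
# where the neighbors of a vertex are obtained by flipping one bit of its index.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 8, 2000
counts = []
for _ in range(trials):
    F = rng.permutation(2 ** n)                 # random ranking of the 2^n vertices
    idx = np.arange(2 ** n)
    nbr_ranks = np.stack([F[idx ^ (1 << b)] for b in range(n)])  # ranks of the n neighbors
    counts.append(np.sum(F < nbr_ranks.min(axis=0)))
print(np.mean(counts), 2 ** n / (n + 1))        # both close to 28.4
```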
References
P. Baldi, Y. Rinott (1989), "Asymptotic Normality of Some Graph-Related Statistics," Journal of Applied Probability, 26, 171-175.
P. Baldi and Y. Rinott (1989), "On Normal Approximation of Distribution in Terms
of Dependency Graphs," Annals of Probability, in press.
P. Baldi, Y. Rinott and C. Stein (1989), "A Normal Approximation for the Number
of Local Maxima of a Random Function on a Graph," In: Probability, Statistics and
Mathematics: Papers in Honor of Samuel Karlin. T.W. Anderson, K.B. Athreya and D.L. Iglehart, Editors, Academic Press.
B. Derrida (1981), "Random Energy Model: An Exactly Solvable Model of Disordered Systems," Physical Review B, 24, 2613-2626.
C. M. Macken and A. S. Perelson (1989), "Protein Evolution on Rugged Landscapes", PNAS, 86, 6191-6195.
C. Stein (1986), "Approximate Computation of Expectations," Institute of Mathematical Statistics Lecture Notes, S.S. Gupta Series Editor, Volume 7.
[Figure 1: A ranking of a 4 x 4 square lattice with periodic boundary conditions and four local minima (d = 4).]

[Figure 2: A ranking of a 4 x 4 square lattice, where the neighbors of a vertex are all the points on the same row and column; there are three local minima (d = 6).]
[Figure 3: A ranking of $H_3$ with two local minima (d = 3).]
Semi-supervised Learning by Entropy Minimization
Yves Grandvalet*
Heudiasyc, CNRS/UTC
60205 Compiègne cedex, France
[email protected]
Yoshua Bengio
Dept. IRO, Université de Montréal
Montreal, Qc, H3C 3J7, Canada
[email protected]
Abstract
We consider the semi-supervised learning problem, where a decision rule
is to be learned from labeled and unlabeled data. In this framework, we
motivate minimum entropy regularization, which makes it possible to incorporate unlabeled data into standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solution benefits from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the "cluster assumption". Finally, we illustrate that the method can also be far superior to manifold learning in high-dimensional spaces.
1 Introduction
In the classical supervised learning classification framework, a decision rule is to be learned
from a learning set L_n = {(x_i, y_i)}_{i=1}^n, where each example is described by a pattern x_i ∈ X and by the supervisor's response y_i ∈ Ω = {ω_1, . . . , ω_K}. We consider semi-supervised learning, where the supervisor's responses are limited to a subset of L_n.
In the terminology used here, semi-supervised learning refers to learning a decision rule on
X from labeled and unlabeled data. However, the related problem of transductive learning,
i.e. of predicting labels on a set of predefined patterns, is addressed as a side issue. Semisupervised problems occur in many applications where labeling is performed by human
experts. They have been receiving much attention during the last few years, but some
important issues are unresolved [10].
In the probabilistic framework, semi-supervised learning can be modeled as a missing data
problem, which can be addressed by generative models such as mixture models thanks
to the EM algorithm and extensions thereof [6]. Generative models apply to the joint density of patterns and class (X, Y). They have appealing features, but they also have major
drawbacks. Their estimation is much more demanding than discriminative models, since
the model of P (X, Y ) is exhaustive, hence necessarily more complex than the model of
*This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence IST-2002-506778. This publication only reflects the authors' views.
P (Y |X). More parameters are to be estimated, resulting in more uncertainty in the estimation process. The generative model being more precise, it is also more likely to be
misspecified. Finally, the fitness measure is not discriminative, so that better models are
not necessarily better predictors of class labels. These difficulties have led to proposals aiming at processing unlabeled data in the framework of supervised classification [1, 5, 11].
Here, we propose an estimation principle applicable to any probabilistic classifier, aiming
at making the most of unlabeled data when they are beneficial, while providing a control
on their contribution to provide robustness to the learning scheme.
2 Derivation of the Criterion

2.1 Likelihood
We first recall how the semi-supervised learning problem fits into standard supervised
learning by using the maximum (conditional) likelihood estimation principle. The learning
set is denoted L_n = {(x_i, z_i)}_{i=1}^n, where z ∈ {0, 1}^K denotes the dummy variable representing the actually available labels (while y represents the precise and complete class information): if x_i is labeled ω_k, then z_ik = 1 and z_iℓ = 0 for ℓ ≠ k; if x_i is unlabeled, then z_iℓ = 1 for ℓ = 1, . . . , K.
We assume that labeling is missing at random, that is, for all unlabeled examples, P(z|x, ω_k) = P(z|x, ω_ℓ) for any (ω_k, ω_ℓ) pair, which implies
P(ω_k | x, z) = z_k P(ω_k | x) / Σ_{ℓ=1}^K z_ℓ P(ω_ℓ | x).   (1)
Assuming independent examples, the conditional log-likelihood of (Z|X) on the observed sample is then
L(θ; L_n) = Σ_{i=1}^n [ log( Σ_{k=1}^K z_ik f_k(x_i; θ) ) + h(z_i) ],   (2)
where h(z), which does not depend on P(X, Y), is only affected by the missingness mechanism, and f_k(x; θ) is the model of P(ω_k|x) parameterized by θ.
This criterion is a concave function of f_k(x_i; θ), and for simple models such as the ones provided by logistic regression, it is also concave in θ, so that the global solution can be obtained by numerical optimization. Maximizing (2) corresponds to maximizing the complete likelihood if no assumption whatsoever is made on P(X) [6].
Provided the f_k(x_i; θ) sum to one, the likelihood is not affected by unlabeled data: unlabeled data convey no information. In the maximum a posteriori (MAP) framework, Seeger remarks that unlabeled data are useless regarding discrimination when the priors on P(X) and P(Y|X) factorize [10]: observing x does not inform about y, unless the modeler assumes so. Benefitting from unlabeled data requires assumptions of some sort on the relationship between X and Y. In the Bayesian framework, this will be encoded by a prior distribution. As there is no such thing as a universally relevant prior, we should look for an induction bias exploiting unlabeled data when the latter is known to convey information.
2.2 When Are Unlabeled Examples Informative?
Theory provides little support to the numerous experimental evidences [5, 7, 8] showing
that unlabeled examples can help the learning process. Learning theory is mostly developed
at the two extremes of the statistical paradigm: in parametric statistics where examples are
known to be generated from a known class of distribution, and in the distribution-free Structural Risk Minimization (SRM) or Probably Approximately Correct (PAC) frameworks.
Semi-supervised learning, in the terminology used here, does not fit the distribution-free
frameworks: no positive statement can be made without distributional assumptions, as for
some distributions P (X, Y ) unlabeled data are non-informative while supervised learning
is an easy task. In this regard, generalizing from labeled and unlabeled data may differ
from transductive inference.
In parametric statistics, theory has shown the benefit of unlabeled examples, either for specific distributions [9], or for mixtures of the form P(x) = p P(x|ω_1) + (1 − p) P(x|ω_2),
where the estimation problem is essentially reduced to the one of estimating the mixture
parameter p [4]. These studies conclude that the (asymptotic) information content of unlabeled examples decreases as classes overlap.1 Thus, the assumption that classes are well
separated is sensible if we expect to take advantage of unlabeled examples.
The conditional entropy H(Y |X) is a measure of class overlap, which is invariant to the
parameterization of the model. This measure is related to the usefulness of unlabeled data
where labeling is indeed ambiguous. Hence, we will measure the conditional entropy of
class labels conditioned on the observed variables:
H(Y|X, Z) = −E_{XYZ}[log P(Y|X, Z)],   (3)
where E_{XYZ} denotes the expectation with respect to (X, Y, Z).
In the Bayesian framework, assumptions are encoded by means of a prior on the model
parameters. Stating that we expect a high conditional entropy does not uniquely define the
form of the prior distribution, but the latter can be derived by resorting to the maximum
entropy principle.2 Let (?, ?) denote the model parameters of P (X, Y, Z); the maximum
entropy prior verifying E?? [H(Y |X, Z)] = c, where the constant c quantifies how small
the entropy should be on average, takes the form
P (?, ?) ? exp (??H(Y |X, Z))) ,
(4)
where ? is the positive Lagrange multiplier corresponding to the constant c.
Computing H(Y |X, Z) requires a model of P (X, Y, Z) whereas the choice of the diagnosis paradigm is motivated by the possibility to limit modeling to conditional probabilities.
We circumvent the need of additional modeling by applying the plug-in principle, which
consists in replacing the expectation with respect to (X, Z) by the sample average. This
substitution, which can be interpreted as ?modeling? P (X, Z) by its empirical distribution,
yields
H_emp(Y|X, Z; L_n) = −(1/n) Σ_{i=1}^n Σ_{k=1}^K P(ω_k|x_i, z_i) log P(ω_k|x_i, z_i).   (5)
This empirical functional is plugged in (4) to define an empirical prior on parameters θ, that is, a prior whose form is partly defined from data [2].
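As an illustration, a small NumPy sketch (our notation) of H_emp: it first forms the posteriors (1) from the label indicators, then averages the plug-in entropies:

import numpy as np

def empirical_entropy(F, Z, eps=1e-12):
    # Posteriors (1): renormalize the model outputs over the admissible
    # labels; one-hot for labeled rows, f_k for unlabeled rows.
    G = Z * F
    G = G / np.clip(G.sum(axis=1, keepdims=True), eps, None)
    # Plug-in estimate (5); only unlabeled rows contribute, since
    # one-hot rows have zero entropy.
    return -np.mean(np.sum(G * np.log(np.clip(G, eps, None)), axis=1))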
2.3 Entropy Regularization
Recalling that f_k(x; θ) denotes the model of P(ω_k|x), the model of P(ω_k|x, z) (1) is defined as follows:
g_k(x, z; θ) = z_k f_k(x; θ) / Σ_{ℓ=1}^K z_ℓ f_ℓ(x; θ).
For labeled data, g_k(x, z; θ) = z_k, and for unlabeled data, g_k(x, z; θ) = f_k(x; θ). From now on, we drop the reference to parameter θ in f_k and g_k to lighten notation. The
¹This statement, given explicitly by [9], is also formalized, though not stressed, by [4], where the Fisher information for unlabeled examples at the estimate p̂ is clearly a measure of the overlap between class conditional densities: I_u(p̂) = ∫ (P(x|ω_1) − P(x|ω_2))² / (p̂ P(x|ω_1) + (1 − p̂) P(x|ω_2)) dx.
²Here, maximum entropy refers to the construction principle which makes it possible to derive distributions from constraints, not to the content of priors regarding entropy.
MAP estimate is the maximizer of the posterior distribution, that is, the maximizer of
C(θ, λ; L_n) = L(θ; L_n) − λ H_emp(Y|X, Z; L_n)
             = Σ_{i=1}^n log( Σ_{k=1}^K z_ik f_k(x_i) ) + λ Σ_{i=1}^n Σ_{k=1}^K g_k(x_i, z_i) log g_k(x_i, z_i),   (6)
where the constant terms in the log-likelihood (2) and log-prior (4) have been dropped. While L(θ; L_n) is only sensitive to labeled data, H_emp(Y|X, Z; L_n) is only affected by the value of f_k(x) on unlabeled data.
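For the binary case, a minimal sketch of minimum entropy logistic regression fit by maximizing (6) is given below; the function and parameter names are ours, and the weight-decay term stands in for the smoothness penalty discussed next:

import numpy as np
from scipy.optimize import minimize

def fit_minent_logreg(X, Z, lam=0.1, ridge=1e-3):
    # Binary (K = 2) logistic model: f_1(x) = sigmoid(w.x + b).
    n, d = X.shape
    eps = 1e-12

    def neg_criterion(theta):
        w, b = theta[:d], theta[d]
        p1 = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        F = np.column_stack([p1, 1.0 - p1])
        G = Z * F
        G = G / np.clip(G.sum(axis=1, keepdims=True), eps, None)
        loglik = np.sum(np.log(np.clip((Z * F).sum(axis=1), eps, None)))
        neg_entropy = np.sum(G * np.log(np.clip(G, eps, None)))
        # Maximize C = loglik + lam * sum g log g; the ridge term is a
        # simple stand-in for the smoothness constraint.
        return -(loglik + lam * neg_entropy) + ridge * np.dot(w, w)

    theta0 = np.zeros(d + 1)
    return minimize(neg_criterion, theta0, method="L-BFGS-B").x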
Note that the approximation H_emp (5) of H (3) breaks down for wiggly functions f_k(·) with abrupt changes between data points (where P(X) is bounded from below). As a result, it is important to constrain f_k(·) in order to enforce the closeness of the two functionals. In the following experimental section, we imposed a smoothness constraint on f_k(·) by adding to the criterion C (6) a penalizer with its own Lagrange multiplier.
3 Related Work
Self-Training Self-training [7] is an iterative process, where a learner imputes the labels of examples which have been classified with confidence in the previous step. Amini et al. [1] analyzed this technique and showed that it is equivalent to a version of the classification EM algorithm, which minimizes the likelihood deprived of the entropy of the partition. In the context of conditional likelihood with labeled and unlabeled examples, the criterion is
Σ_{i=1}^n [ log( Σ_{k=1}^K z_ik f_k(x_i) ) + Σ_{k=1}^K g_k(x_i) log g_k(x_i) ],
which is recognized as an instance of the criterion (6) with λ = 1.
Self-confident logistic regression [5] is another algorithm optimizing the criterion for λ = 1. Using smaller λ values is expected to have two benefits: first, the influence of unlabeled examples can be controlled, in the spirit of the EM-λ [8], and second, slowly increasing λ defines a scheme similar to deterministic annealing, which should help the optimization process avoid poor local minima of the criterion.
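For comparison, a generic self-training loop of the kind analyzed by [1, 7] might look as follows (a sketch; the fit/predict_proba interface and the confidence threshold are our choices):

import numpy as np

def self_train(fit, predict_proba, X, Z, rounds=10, threshold=0.95):
    # fit(X, Z) -> model; predict_proba(model, X) -> (n, K) array.
    Z = Z.copy()
    for _ in range(rounds):
        model = fit(X, Z)
        P = predict_proba(model, X)
        unlabeled = Z.sum(axis=1) > 1          # all-ones indicator rows
        confident = unlabeled & (P.max(axis=1) > threshold)
        if not confident.any():
            break
        rows = np.where(confident)[0]
        Z[rows] = 0                            # impute a hard label
        Z[rows, P[rows].argmax(axis=1)] = 1
    return fit(X, Z)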
Minimum entropy methods Minimum entropy regularizers have been used in other contexts to encode learnability priors (e.g. [3]). In a sense, H_emp can be seen as a poor man's way to generalize this approach to continuous input spaces. This empirical functional was
also used by Zhu et al. [13, Section 6] as a criterion to learn weight function parameters in
the context of transduction on manifolds for learning.
Input-Dependent Regularization Our criterion differs from input-dependent regularization [10, 11] in that it is expressed only in terms of P (Y |X, Z) and does not involve
P (X). However, we stress that for unlabeled data, the regularizer agrees with the complete
likelihood provided P (X) is small near the decision surface. Indeed, whereas a generative model would maximize log P (X) on the unlabeled data, our criterion minimizes the
conditional entropy on the same points. In addition, when the model is regularized (e.g.
with weight decay), the conditional entropy is prevented from being too small close to the
decision surface. This will favor putting the decision surface in a low density area.
4 Experiments

4.1 Artificial Data
In this section, we chose a simple experimental setup in order to avoid artifacts stemming
from optimization problems. Our goal is to check to what extent supervised learning can
be improved by unlabeled examples, and if minimum entropy can compete with generative
models which are usually advocated in this framework.
The minimum entropy regularizer is applied to the logistic regression model. It is compared
to logistic regression fitted by maximum likelihood (ignoring unlabeled data) and logistic
regression with all labels known. The former shows what has been gained by handling
unlabeled data, and the latter provides the ?crystal ball? performance obtained by guessing
correctly all labels. All hyper-parameters (weight-decay for all logistic regression models
plus the ? parameter (6) for minimum entropy) are tuned by ten-fold cross-validation.
Minimum entropy logistic regression is also compared to the classic EM algorithm for
Gaussian mixture models (two means and one common covariance matrix estimated by
maximum likelihood on labeled and unlabeled examples, see e.g. [6]). Bad local maxima
of the likelihood function are avoided by initializing EM with the parameters of the true
distribution when the latter is a Gaussian mixture, or with maximum likelihood parameters
on the (fully labeled) test sample when the distribution departs from the model. This initialization advantages EM, since it is guaranteed to pick, among all local maxima of the
likelihood, the one which is in the basin of attraction of the optimal value. Furthermore,
this initialization prevents interferences that may result from the ?pseudo-labels? given to
unlabeled examples at the first E-step. In particular, ?label switching? (i.e. badly labeled
clusters) is avoided at this stage.
Correct joint density model In the first series of experiments, we consider two-class
problems in a 50-dimensional input space. Each class is generated with equal probability from a normal distribution. Class ω_1 is normal with mean (a, a, . . . , a) and unit covariance matrix. Class ω_2 is normal with mean −(a, a, . . . , a) and unit covariance matrix. Parameter a tunes the Bayes error, which varies from 1% to 20% (1%, 2.5%, 5%, 10%, 20%). The learning sets comprise n_l labeled examples (n_l = 50, 100, 200) and n_u unlabeled examples (n_u = n_l × (1, 3, 10, 30, 100)). Overall, 75 different setups are evaluated, and
for each one, 10 different training samples are generated. Generalization performances are
estimated on a test set of size 10 000.
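A sketch of the sampling procedure for one such training set (names and the seed are ours):

import numpy as np

def make_benchmark(a, n_l, ratio, dim=50, seed=0):
    rng = np.random.default_rng(seed)
    n = n_l + ratio * n_l
    y = rng.integers(0, 2, size=n)                # 0 = omega_1, 1 = omega_2
    means = np.where(y[:, None] == 0, a, -a)      # +/- (a, ..., a)
    X = rng.standard_normal((n, dim)) + means
    Z = np.ones((n, 2), dtype=int)                # unlabeled rows: all ones
    Z[:n_l] = 0
    Z[np.arange(n_l), y[:n_l]] = 1                # labeled rows: one-hot
    return X, Z, y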
This benchmark provides a comparison for the algorithms in a situation where unlabeled
data are known to convey information. Besides the favorable initialization of the EM algorithm to the optimal parameters, EM benefits from the correctness of the model: data
were generated according to the model, that is, two Gaussian subpopulations with identical
covariances. The logistic regression model is only compatible with the joint distribution,
which is a weaker fulfillment than correctness.
As there is no modeling bias, differences in error rates are only due to differences in estimation efficiency. The overall error rates (averaged over all settings) are in favor of minimum
entropy logistic regression (14.1 ± 0.3%). EM (15.6 ± 0.3%) does worse on average than logistic regression (14.9 ± 0.3%). For reference, the average Bayes error rate is 7.7%, and logistic regression reaches 10.4 ± 0.1% when all examples are labeled.
Figure 1 provides more informative summaries than these raw numbers. The plots represent the error rates (averaged over n_l) versus the Bayes error rate and the n_u/n_l ratio. The
first plot shows that, as asymptotic theory suggests [4, 9], unlabeled examples are mostly
informative when the Bayes error is low. This observation validates the relevance of the
minimum entropy assumption. This graph also illustrates the consequence of the demanding parametrization of generative models. Mixture models are outperformed by the simple
logistic regression model when the sample size is low, since their number of parameters
grows quadratically (vs. linearly) with the number of input features.
The second plot shows that the minimum entropy model quickly takes advantage of unlabeled data when classes are well separated. With n_u = 3n_l, the model considerably
improves upon the one discarding unlabeled data. At this stage, the generative models do
not perform well, as the number of available examples is low compared to the number of
parameters in the model. However, for very large sample sizes, with 100 times more unlabeled examples than labeled examples, the generative approach eventually becomes more accurate than the diagnosis approach.

Figure 1: Left: test error vs. Bayes error rate for n_u/n_l = 10; right: test error vs. n_u/n_l ratio for 5% Bayes error (a = 0.23). Test errors of minimum entropy logistic regression (◦) and mixture models (+). The errors of logistic regression (dashed) and of logistic regression with all labels known (dash-dotted) are shown for reference.
Misspecified joint density model In a second series of experiments, the setup is slightly
modified by letting the class-conditional densities be corrupted by outliers. For each class,
the examples are generated from a mixture of two Gaussians centered on the same mean:
a unit variance component gathers 98 % of examples, while the remaining 2 % are generated from a large variance component, where each variable has a standard deviation of 10.
The mixture model used by EM is slightly misspecified since it is a simple Gaussian mixture. The results, displayed in the left-hand-side of Figure 2, should be compared with the
right-hand-side of Figure 1. The generative model dramatically suffers from the misspecification and behaves worse than logistic regression for all sample sizes. The unlabeled
examples have first a beneficial effect on test error, then have a detrimental effect when
they overwhelm the number of labeled examples. On the other hand, the diagnosis models
behave smoothly as in the previous case, and the minimum entropy criterion performance
improves.
Figure 2: Test error vs. n_u/n_l ratio for a = 0.23. Average test errors for minimum entropy logistic regression (◦) and mixture models (+). The test error rates of logistic regression (dotted) and of logistic regression with all labels known (dash-dotted) are shown for reference. Left: experiment with outliers; right: experiment with uninformative unlabeled data.
The last series of experiments illustrates the robustness with respect to the cluster assumption, by testing it on distributions where unlabeled examples are not informative, and where a low density P(X) does not indicate a boundary region. The data is drawn from two Gaussian clusters as in the first series of experiments, but the label is now independent of the clustering: an example x belongs to class ω_1 if x_2 > x_1 and to class ω_2 otherwise; the Bayes decision boundary now separates each cluster in its middle. The mixture model is unchanged; it is now far from the model used to generate the data. The right-hand plot of Figure 2 shows that the favorable initialization of EM does not prevent the model from being fooled by unlabeled data: its test error steadily increases with the amount of unlabeled data. On the other hand, the diagnosis models behave well, and the minimum entropy algorithm is not distracted by the two clusters; its performance is nearly identical to that of training with labeled data only (cross-validation provides λ values close to zero), which can be regarded as the ultimate performance in this situation.
Comparison with manifold transduction Although our primary goal is to infer a decision function, we also provide comparisons with a transduction algorithm of the "manifold family". We chose the consistency method of Zhou et al. [12] for its simplicity. As suggested by the authors, we set α = 0.99, and the scale parameter σ² was optimized on test results [12]. The results are reported in Table 1. The experiments are limited due to the
memory requirements of the consistency method in our naive MATLAB implementation.
Table 1: Error rates (%) of minimum entropy (ME) vs. consistency method (CM), for a = 0.23, n_l = 50, and (a) pure Gaussian clusters, (b) Gaussian clusters corrupted by outliers, (c) class boundary separating one Gaussian cluster.

  n_u        50            150           500           1500
a) ME    10.8 ± 1.5    9.8 ± 1.9     8.8 ± 2.0     8.3 ± 2.6
a) CM    21.4 ± 7.2    25.5 ± 8.1    29.6 ± 9.0    26.8 ± 7.2
b) ME     8.5 ± 0.9    8.3 ± 1.5     7.5 ± 1.5     6.6 ± 1.5
b) CM    22.0 ± 6.7    25.6 ± 7.4    29.8 ± 9.7    27.7 ± 6.8
c) ME     8.7 ± 0.8    8.3 ± 1.1     7.2 ± 1.0     7.2 ± 1.7
c) CM    51.6 ± 7.9    50.5 ± 4.0    49.3 ± 2.6    50.2 ± 2.2
The results are extremely poor for the consistency method, whose error is far above that of minimum entropy, and which does not show any sign of improvement as the sample of unlabeled data grows. Furthermore, when classes do not correspond to clusters, the consistency method performs random class assignments. In fact, our setup, which was designed for the comparison of global classifiers, is extremely unfavorable to manifold methods, since the data is truly 50-dimensional. In this situation, local methods suffer from the "curse of dimensionality", and many more unlabeled examples would be required to get sensible results. Hence, these results mainly illustrate that manifold learning is not the best choice in semi-supervised learning for truly high-dimensional data.
4.2 Facial Expression Recognition
We now consider an image recognition problem, consisting in recognizing seven (balanced)
classes corresponding to the universal emotions (anger, fear, disgust, joy, sadness, surprise
and neutral). The patterns are gray-level images of frontal faces, with standardized positions. The data set comprises 375 such pictures made of 140 × 100 pixels.
We tested kernelized logistic regression (Gaussian kernel), its minimum entropy version,
nearest neighbor, and the consistency method. We repeatedly (10 times) sampled 1/10 of the dataset for providing the labeled part, and the remainder for testing. Although (α, σ²) were chosen to minimize the test error, the consistency method performed poorly with 63.8 ± 1.3% test error (compared to 86% error for random assignments). Nearest neighbor gets similar results with 63.1 ± 1.3% test error, and kernelized logistic regression (ignoring unlabeled examples) improves to reach 53.6 ± 1.3%. Minimum entropy kernelized logistic regression achieves 52.0 ± 1.9% error (compared to about 20% error for humans on this database). The scale parameter chosen for kernelized logistic regression (by ten-fold cross-validation) amounts to using a global classifier. Again, the local methods
fail. This may be explained by the fact that the database contains several pictures of each
person, with different facial expressions. Hence, local methods are likely to pick the same
identity instead of the same expression, while global methods are able to learn the relevant
directions.
5 Discussion
We propose to tackle the semi-supervised learning problem in the supervised learning
framework by using the minimum entropy regularizer. This regularizer is motivated by theory, which shows that unlabeled examples are mostly beneficial when classes have small
overlap. The MAP framework provides a means to control the weight of unlabeled examples, and thus to depart from optimism when unlabeled data tend to harm classification.
Our proposal encompasses self-learning as a particular case, as minimizing entropy increases the confidence of the classifier output. It also approaches the solution of transductive large margin classifiers in another limiting case, as minimizing entropy is a means to
drive the decision boundary away from learning examples.
The minimum entropy regularizer can be applied to both local and global classifiers. As a
result, it can improve over manifold learning when the dimensionality of data is effectively
high, that is, when data do not lie on a low-dimensional manifold. Also, our experiments
suggest that the minimum entropy regularization may be a serious contender to generative
models. It compares favorably to these mixture models in three situations: for small sample
sizes, where the generative model cannot completely benefit from the knowledge of the
correct joint model; when the joint distribution is (even slightly) misspecified; when the
unlabeled examples turn out to be non-informative regarding class probabilities.
References
[1] M. R. Amini and P. Gallinari. Semi-supervised logistic regression. In 15th European Conference on Artificial Intelligence, pages 390–394. IOS Press, 2002.
[2] J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, New York, 2nd edition, 1985.
[3] M. Brand. Structure learning in conditional probability models via an entropic prior and parameter extinction. Neural Computation, 11(5):1155–1182, 1999.
[4] V. Castelli and T. M. Cover. The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Trans. on Information Theory, 42(6):2102–2117, 1996.
[5] Y. Grandvalet. Logistic regression for partial labels. In 9th Information Processing and Management of Uncertainty in Knowledge-based Systems (IPMU'02), pages 1935–1941, 2002.
[6] G. J. McLachlan. Discriminant Analysis and Statistical Pattern Recognition. Wiley, 1992.
[7] K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In Ninth International Conference on Information and Knowledge Management, pages 86–93, 2000.
[8] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):135–167, 2000.
[9] T. J. O'Neill. Normal discrimination with unclassified observations. Journal of the American Statistical Association, 73(364):821–826, 1978.
[10] M. Seeger. Learning with labeled and unlabeled data. Technical report, Institute for Adaptive and Neural Computation, University of Edinburgh, 2002.
[11] M. Szummer and T. S. Jaakkola. Information regularization with partially labeled data. In Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[12] D. Zhou, O. Bousquet, T. Navin Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, 2004.
[13] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In 20th Int. Conf. on Machine Learning, pages 912–919, 2003.
Validity estimates for loopy Belief Propagation
on binary real-world networks
Joris Mooij
Dept. of Biophysics, Inst. for Neuroscience, Radboud Univ. Nijmegen
6525 EZ Nijmegen, the Netherlands
[email protected]
Hilbert J. Kappen
Dept. of Biophysics, Inst. for Neuroscience, Radboud Univ. Nijmegen
6525 EZ Nijmegen, the Netherlands
[email protected]
Abstract
We introduce a computationally efficient method to estimate the validity of the BP method as a function of graph topology, the connectivity strength, frustration and network size. We present numerical results
that demonstrate the correctness of our estimates for the uniform random
model and for a real-world network ("C. elegans"). Although the method is restricted to pair-wise interactions, no local evidence (zero "biases"),
and binary variables, we believe that its predictions correctly capture the
limitations of BP for inference and MAP estimation on arbitrary graphical models. Using this approach, we find that BP always performs better
than MF. Especially for large networks with broad degree distributions
(such as scale-free networks) BP turns out to significantly outperform
MF.
1 Introduction
Loopy Belief Propagation (BP) [1] and its generalizations (such as the Cluster Variation
Method [2]) are powerful methods for inference and optimization. As is well-known, BP is
exact on trees, but also yields surprisingly good results for many other graphs that arise in
real-world applications [3, 4]. On the other hand, for densely connected graphs with high
interaction strengths the results can be quite bad or BP can simply fail to converge. Despite
the fact that BP is often used in applications nowadays, a good theoretical understanding of
its convergence properties and the quality of the approximation is still lacking (except for
the very special case of graphs with a single loop [5]).
In this article we attempt to answer the question in what way the quality of the BP results depends on the topology of the underlying graph (looking at structural properties such
as short cycles and large "hubs") and on the interaction potentials (i.e. strength and frustration). We do this for the special but interesting case of binary networks with symmetric
pairwise potentials (i.e. Boltzmann machines) without local evidence. This has the practical
advantage that analytical calculations are feasible and furthermore we believe that adding
local evidence will only serve to extend the domain of convergence, implying this to be the
worst-case scenario. We compare the results with those of the variational mean-field (MF)
method.
Real-world graphs are often far from uniformly random and possess structure such as clustering and power-law degree distributions [6]. Since we expect these structural features to
arise in many applications of BP, we focus in this article on graphs modeling this kind of
features. In particular, we consider Erdős–Rényi uniform random graphs [7], Barabási–Albert "scale-free" graphs [8], and the neural network of a widely studied worm, the Caenorhabditis elegans.
This paper is organized as follows. In the next section we describe the class of graphical
models under investigation and explain our method to efficiently estimate the validity of
BP and MF. In section 3 we give a qualitative discussion of how the connectivity strength
and frustration generally govern the model behavior and discuss the relevant regimes of the
model parameters. We show for uniform random graphs that our validity estimates are in
very good agreement with the real behavior of the BP algorithm. In section 4 we study the
influence of graph topology. Thanks to the numerical efficiency of our estimation method
we are able to study very large (N ? 10000) networks, for which it would not be feasible
to simply run BP and look what happens. We also try our method on the neural network of
the worm C. Elegans and find almost perfect agreement of our predictions with observed
BP behavior. We conclude that BP is always better than MF and that the difference is
particularly striking for the case of large networks with broad degree distributions such as
scale-free graphs.
2 Model, paramagnetic solution and stability analysis
Let G = (V, B) be an undirected labelled graph without self-connections, defined by a set of nodes V = {1, . . . , N} and a set of links B ⊆ {(i, j) | 1 ≤ i < j ≤ N}. The adjacency matrix corresponding to G is denoted M and defined as follows: M_ij := 1 if (ij) ∈ B or (ji) ∈ B, and 0 otherwise. We denote the set of neighbors of node i ∈ V by N_i := {j ∈ V | (ij) ∈ B} and its degree by d_i := #(N_i). We define the average degree d := (1/N) Σ_{i∈V} d_i and the maximum degree Δ := max_{i∈V} d_i.
To each node i we associate a binary random variable x_i taking values in {−1, +1}. Let W be a symmetric N × N matrix defining the strength of the links between the nodes. The probability distribution over configurations x = (x_1, . . . , x_N) is given by
P(x) := (1/Z) Π_{(ij)∈B} e^{W_ij x_i x_j} = (1/Z) Π_{i,j∈V} e^{(1/2) M_ij W_ij x_i x_j},   (1)
with Z a normalization constant. We will take the weight matrix W to be random, with i.i.d. entries {W_ij}_{1≤i<j≤N} distributed according to the Gaussian law with mean J0 and variance J².
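As a concrete illustration (a sketch; names and the seed are ours), one can draw such a weight matrix with NumPy:

import numpy as np

def sample_weights(M, J0, J, seed=0):
    # i.i.d. Gaussian weights with mean J0 and variance J**2 on the
    # links of the graph with adjacency matrix M (zero elsewhere).
    rng = np.random.default_rng(seed)
    N = M.shape[0]
    U = np.triu(rng.normal(J0, J, size=(N, N)), k=1)   # upper triangle only
    return (U + U.T) * M                               # symmetrize, keep edges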
For this model, instead of using the single-node and pair-wise beliefs b_i(x_i) resp. b_ij(x_i, x_j), it turns out to be more convenient to use the (equivalent) quantities m := {m_i}_{i∈V} and ξ := {ξ_ij}_{(ij)∈B}, defined by:
m_i := b_i(+1) − b_i(−1);
ξ_ij := b_ij(+1, +1) − b_ij(+1, −1) − b_ij(−1, +1) + b_ij(−1, −1).
We will use these throughout this paper. We call the m_i magnetizations; note that the expectation values E x_i vanish because of the symmetry in the probability distribution (1).
As is well-known [2, 9], fixed points of BP correspond to stationary points of the Bethe
free energy, which is in this case given by
F_Be(m, ξ) := − Σ_{(ij)∈B} W_ij ξ_ij + Σ_{i=1}^N (1 − d_i) Σ_{x_i=±1} η( (1 + m_i x_i)/2 )
            + Σ_{(ij)∈B} Σ_{x_i,x_j=±1} η( (1 + m_i x_i + m_j x_j + x_i x_j ξ_ij)/4 ),
with η(x) := x log x. Note that with this parameterization all normalization and overlap constraints (i.e. Σ_{x_j} b_ij(x_i, x_j) = b_i(x_i)) are satisfied by construction [10]. We can minimize the Bethe free energy analytically by setting its derivatives to zero; one then immediately sees that a possible solution of the resulting equations is the paramagnetic¹ solution: m_i = 0 and ξ_ij = tanh W_ij (for (ij) ∈ B). For this solution to be a minimum (instead of a saddle point or maximum), the Hessian of F_Be at that point should be positive-definite.
This condition turns out to be equivalent to the following Bethe stability matrix
(A_Be)_ij := δ_ij ( 1 + Σ_{k∈N_i} ξ_ik² / (1 − ξ_ik²) ) − M_ij ξ_ij / (1 − ξ_ij²)   (with ξ_ij = tanh W_ij)   (2)
being positive-definite. Whether this is the case obviously depends on the values of the
weights Wij and the adjacency matrix M . Since for zero weights (W = 0), the stability
matrix is just the identity matrix, the paramagnetic solution is a minimum of the Bethe free
energy for small values of the weights W_ij. The question of what "small" exactly means in terms of J and J0, and how this relates to the graph topology, will be taken up in the next
two sections.
First we discuss the situation for the mean-field variational method. The mean-field free
energy F_MF(m) only depends on m; we can set its derivatives to zero, which again yields the paramagnetic solution m = 0. The corresponding stability matrix (equal to the Hessian) is given by
(A_MF)_ij := δ_ij − W_ij M_ij
and should be positive-definite for the paramagnetic solution to be stable. One can prove [11] that A_Be is positive-definite whenever A_MF is positive-definite. Since the exact magnetizations are zero, we conclude that the Bethe approximation is better than the mean-field approximation for all possible choices of the weights W. As we will see later on, this difference can become quite large for large networks.
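The stability checks above reduce to computing a smallest eigenvalue; a minimal NumPy sketch (function names are ours) covering both matrices:

import numpy as np

def min_eig_bethe(W, M):
    # Smallest eigenvalue of the Bethe stability matrix (2); the
    # paramagnetic solution is a stable minimum iff it is positive.
    xi = np.tanh(W) * M
    r = xi ** 2 / (1.0 - xi ** 2)
    A = np.diag(1.0 + r.sum(axis=1)) - xi / (1.0 - xi ** 2)
    return np.linalg.eigvalsh(A).min()

def min_eig_mf(W, M):
    # Smallest eigenvalue of the mean-field stability matrix.
    A = np.eye(W.shape[0]) - W * M
    return np.linalg.eigvalsh(A).min()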
3 Weight dependence
The behavior of the graphical model depends critically on the parameters J0 and J. Taking
the graph topology to be uniformly random (see also subsection 4.1) we recover the model
known in the statistical physics community as the Viana-Bray model [12], which has been
thoroughly studied and is quite well-understood. In the limit N ? ?, there are different
relevant regimes (?phases?) for the parameters J and J0 to be distinguished (cf. Fig. 1):
? The paramagnetic phase, where the magnetizations all vanish (m = 0), valid for
J and J0 both small.
? The ferromagnetic phase, where two configurations (characterized by all magnetizations being either positive or negative) each get half of the probability mass.
This is the phase occurring for large J0 .
1
Throughout this article, we will use terminology from statistical physics if there is no good
corresponding terminology in the field of machine learning available.
[Figure 1: two panels over the (J0, J) plane. Panel (a), "BP convergence behavior", shows regions of convergence to m = 0, convergence to ferromagnetic solutions, and no convergence, separated by a marginal instability line. Panel (b), "Stability of the m = 0 minimum of the Bethe free energy", shows where m = 0 is stable (paramagnetic phase), stable (spin-glass phase), and unstable (ferromagnetic phase).]

Figure 1: Empirical regime boundaries for the ER graph model with N = 100 and d = 20, averaged over three instances; expectation values are shown as thick black lines, standard deviations are indicated by the gray areas. See the main text for additional explanation. The exact location of the boundary between the spin-glass and ferromagnetic phase in the right-hand plot (indicated by the dashed line) was not calculated. The red dash-dotted line shows the stability boundary for MF.
• The spin-glass phase, where the probability mass is distributed over exponentially (in N) many different configurations. This phase occurs for frustrated weights, i.e. for large J.
Consider now the right-hand plot in Fig. 1. Here we have plotted the different regimes concerning the stability of the paramagnetic solution of the Bethe approximation.² We find that the m = 0 solution is indeed stable for J and J0 small and becomes unstable at some point when J0 increases. This signals the paramagnetic–ferromagnetic phase transition. The location is in good agreement with the known phase boundary found for the N → ∞ limit
by advanced statistical physics methods as we show in more detail in [11]. For comparison
we have also plotted the stability boundary for MF (the red dash-dotted line). Clearly, the
mean-field approximation breaks down much earlier than the Bethe approximation and is
unable to capture the phase transitions occurring for large connectivity strengths.
The boundary between the spin-glass phase and the paramagnetic phase is more subtle.
What happens is that the Bethe stability matrix becomes marginally stable at some point
when we increase J, i.e. the minimum eigenvalue of A_Be approaches zero (in the limit N → ∞). This means that the Bethe free energy becomes very flat at that point. If we go on increasing J, the m = 0 solution becomes stable again (in other words, the minimum eigenvalue of the stability matrix A_Be becomes positive again). We interpret the marginal
instability as signalling the onset of the spin-glass phase. Indeed it coincides with the
known phase boundary for the Viana-Bray model [11, 12]. We observe a similar marginal
instability for other graph topologies.
Now consider the left-hand plot, Fig. 1(a). It shows the convergence behavior of the BP algorithm, which was determined by running BP with a fixed number of maximum iterations
and slight damping. The messages were initialized randomly. We find different regimes
that are separated by the boundaries shown in the plot. For small J and J0 , BP converges
to m = 0. For J0 large enough, BP converges to one of the two ferromagnetic solutions
²Although in Fig. 1 we show only one particular graph topology, the general appearance of these plots does not differ much for other graph topologies, especially for large N. The scale of the plots mostly depends on the network size N and the average degree d, as we will show in the next section.
[Figure 2: two panels, "Mean Field" and "Bethe", plotting J_c √d against network size N.]

Figure 2: Critical values for Bethe and MF for two graph topologies (ER and BA) in the dense limit with d = 0.1N, as a function of network size. Note that the y-axis is rescaled by √d.
(which one is determined by the random initial conditions). For large J, BP does not converge within 1000 iterations, indicating a complex probability distribution. The boundaries
coincide within statistical precision with those in the right-hand plot which were obtained
by the stability analysis.
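For reference, a sketch of a damped BP iteration of this kind, in the standard cavity-field parameterization for Ising-type pairwise models (the parameterization and names are our choices; the paper does not spell out its implementation):

import numpy as np

def run_bp(W, M, max_iter=1000, damping=0.1, tol=1e-9, seed=0):
    # u[i, j] is the cavity field node i sends to node j; the update
    # u_new[i, j] = atanh( tanh(W[i, j]) * tanh(sum_k u[k, i] - u[j, i]) )
    # is the standard BP reduction for pairwise binary models with zero biases.
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    u = 0.01 * rng.standard_normal((N, N)) * M       # random initialization
    t = np.tanh(W) * M
    for _ in range(max_iter):
        total = u.sum(axis=0)                        # field arriving at each node
        cav = (total[:, None] - u.T) * M             # leave out the j -> i message
        u_new = np.arctanh(t * np.tanh(cav))
        if np.max(np.abs(u_new - u)) < tol:
            return np.tanh(u_new.sum(axis=0)), True  # magnetizations, converged
        u = (1.0 - damping) * u_new + damping * u    # slight damping
    return np.tanh(u.sum(axis=0)), False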
The computation time necessary for producing a plot such as Fig. 1(a), showing the convergence behavior of BP, quickly increases with increasing N . The computation time needed
for the stability analysis (Fig. 1(b)), which amounts to calculating the minimal eigenvalue
of the N × N stability matrix, is much less, allowing us to investigate the behavior of BP
for large networks.
4 Graph topology
In this section we will concentrate on the frustrated case, more precisely on the case J0 = 0 (i.e. the y-axis in the regime diagrams), and study the location of the Bethe marginal instability and of the MF instability for various graph topologies as a function of network size N and average degree d. We will denote by J_c^Be the critical value of J at which the Bethe paramagnetic solution becomes marginally unstable, and we will refer to this as the Bethe critical value. The critical value of J where the MF solution becomes unstable will be denoted as J_c^MF and referred to as the MF critical value.
In studying the influence of graph topology for large networks, we have to distinguish two cases, which we call the dense and sparse limits. In the dense limit, we let N → ∞ and scale the average degree as d = cN for some fixed constant c. In this limit, we find that the influence of the graph topology is almost negligible. For all graph topologies that we have considered, we find the following asymptotic behavior for the critical values:
J_c^Be ≈ 1/√d,   J_c^MF ≈ 1/(2√d).
The constant of proportionality is approximately 1. These results are illustrated in Fig. 2 for two different graph topologies that will be discussed in more detail below.
In the sparse limit, we let N → ∞ but keep d fixed. In that case the resulting critical values
show significant dependence on the graph topology as we will see.
4.1 Uniform random graphs (ER)
The first and most elementary random graph model we will consider was introduced and studied by Erdős and Rényi [7]. The ensemble, which we denote as ER(N, p), consists of
[Figure 3: plot of J_c against N, with curves for the Bethe J_c and the MF J_c, and reference levels 1/√d and 1/(2√Δ).]

Figure 3: Critical values for Bethe and MF for Erdős–Rényi uniform random graphs with average degree d = 10.
the graphs with N nodes; links are added between each pair of nodes independently with
probability p. The resulting graphs have a degree distribution that is approximately Poisson
for large N, and the expected average degree is E[d] = p(N − 1). As was mentioned before, the resulting graphical model is known in the statistical physics literature as the Viana–Bray model (with zero "external field").
Fig. 3 shows the results for the sparse limit, where p is chosen such that the expected average degree is fixed to d = 10. The Bethe critical value J_c^Be appears to be independent of network size and is slightly larger than 1/√d. The MF critical value J_c^MF does depend on network size (it looks to be proportional to 1/√Δ instead of 1/√d); in fact it can be proven that it converges very slowly to 0 as N → ∞ [11], implying that the MF approximation breaks down for very large ER networks in the sparse limit. Although this is an interesting result, one could say that for all practical purposes the MF critical value J_c^MF is nearly independent of network size N for uniform random graphs.
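Reusing the helpers from the earlier sketches, one can reproduce this kind of experiment in a few lines (a rough scan, with our shortcut of a single disorder sample per J):

import numpy as np

def er_adjacency(N, p, seed=0):
    rng = np.random.default_rng(seed)
    U = np.triu(rng.random((N, N)) < p, k=1).astype(float)
    return U + U.T

def bethe_marginal_J(M, jmax=1.0, steps=100, seed=0):
    # Scan J at J0 = 0 for the point where the minimum eigenvalue of
    # A_Be comes closest to zero (it need not cross zero; cf. the
    # discussion of Fig. 1).
    Js = np.linspace(jmax / steps, jmax, steps)
    eigs = [min_eig_bethe(sample_weights(M, 0.0, J, seed), M) for J in Js]
    return Js[int(np.argmin(eigs))]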
4.2 Scale-free graphs (BA)
A phenomenon often observed in real-world networks is that the degree distribution behaves like a power law, i.e. the number of nodes with degree k is proportional to k^(−γ) for some γ > 0. These graphs are also known as "scale-free" graphs. The first random graph model exhibiting this behavior is from Barabási and Albert [8].
We will consider a slightly different model, which we will denote by BA(N, m). It is defined as a stochastic process, yielding graphs with more and more nodes as time goes on. At t = 0 one starts with the graph consisting of m nodes and no links. At each time step, one node is added; it is connected with m different already existing nodes, attaching preferentially to nodes with higher degree ("rich get richer"). More specifically, we take the probability to connect to a node of degree k to be proportional to k + 1. The degree distribution turns out to have a power-law dependence for N → ∞ with exponent γ = 3. In Fig. 4 we illustrate some BA graphs. The difference between the maximum degree Δ and the average degree d is rather large: whereas the average degree d converges to 2m, the maximum degree Δ is known to scale as √N.
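A sketch of this preferential-attachment process (our implementation of the variant described above):

import numpy as np

def ba_adjacency(N, m, seed=0):
    # Start from m isolated nodes; each new node attaches to m distinct
    # existing nodes, chosen with probability proportional to degree + 1.
    rng = np.random.default_rng(seed)
    A = np.zeros((N, N))
    deg = np.zeros(N)
    for t in range(m, N):
        p = (deg[:t] + 1.0) / (deg[:t] + 1.0).sum()
        for j in rng.choice(t, size=m, replace=False, p=p):
            A[t, j] = A[j, t] = 1.0
            deg[t] += 1.0
            deg[j] += 1.0
    return A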
Fig. 5 shows the results of the stability analysis for BA graphs with average degree d = 10. Note that the y-axis is rescaled by √Δ to show that the MF critical value J_c^MF is proportional to 1/√Δ. The Bethe critical values are seen to have a scaling behavior that lies somewhere between 1/√d and 1/√Δ. Compared to the situation for uniform ER graphs, BP now even more significantly outperforms MF. The relatively low sensitivity to the maximum degree Δ that BP exhibits here can be understood intuitively since BA graphs resemble forests of sparsely interconnected stars of high degree, on which BP is exact.
4.3 C. elegans
We have also applied our stability analysis to the neural network of the worm C. elegans, which is publicly available at http://elegans.swmed.edu/. This graph has N = 202 and d = 19.4. We have calculated the ferromagnetic (J = 0) transition and the spin-glass
(J0 = 0) transition. We also calculated the critical value of J where BP stops converging,
and the value of J where BP does not find the paramagnetic solution anymore. The results
are shown in Table 1. Note the very good agreement for the Bethe critical value and the
critical J where BP stops finding the m = 0 solution. These results show the accuracy of
our method of estimating BP validity on real-world networks.
Table 1: Critical values and BP boundaries for the C. elegans network.

                             Spin-glass          Ferromagnetic
MF critical value            0.0927 ± 0.0023     0.0387
Bethe critical value         0.197 ± 0.016       0.0406
BP m = 0 boundary            0.194 ± 0.014       0.0400
BP convergence boundary      0.209 ± 0.027       > 1

5 Conclusions
We have introduced a computationally efficient method to estimate the validity of BP as a
function of graph topology, the connectivity strength, frustration and network size. Using
this approach, we have found that:
• for any graph, the Bethe approximation is valid for a larger set of connectivity strengths W_ij than the mean-field approximation;
• for uniform random graphs, the quality of both the MF approximation and the Bethe approximation is determined by the average degree of the network (J_c ≈ 1/√d for the spin-glass transition) and is nearly independent of network size;
• for scale-free networks the validity of the MF approximation scales very poorly with network size due to the increase of the maximal degree ("rich get richer"). In contrast, the validity of the BP approximation scales very well with network size. This is in agreement with our intuition that these networks resemble a forest of high degree stars ("hubs") that are sparsely interconnected, and the fact that BP is exact on stars;
• in the limit in which the graph size N → ∞ and the average degree d scales proportionally to N, the influence of the graph-topological details on the location of the spin-glass transition (at J ≈ 1/√d) diminishes and becomes largely irrelevant.
Figure 4: Barabási–Albert graphs for N = 20, with m = 1, m = 2, and m = 3 (from left to right).
[Figure 5: plot of J_c √Δ against N, with curves for Bethe, the reference 1/√d, and MF.]

Figure 5: Critical values for Bethe and MF for BA scale-free random graphs with average degree d = 10. Note that the y-axis is rescaled by √Δ.
Acknowledgments
The research reported here is part of the Interactive Collaborative Information Systems
(ICIS) project, supported by the Dutch Ministry of Economic Affairs, grant BSIK03024.
References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988.
[2] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems, volume 13, pages 689–695, 2001.
[3] K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference: an empirical study. In Proc. of the Conf. on Uncertainty in AI, pages 467–475, 1999.
[4] B. Frey and D. MacKay. A revolution: Belief propagation in graphs with cycles. In Advances in Neural Information Processing Systems, volume 10, pages 479–485, 1997.
[5] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12:1–41, 2000.
[6] R. Albert and A.-L. Barabási. Statistical mechanics of complex networks. Rev. Mod. Phys., 74:47–97, 2002.
[7] P. Erdős and A. Rényi. On random graphs I. Publ. Math. Debrecen, 6:290–291, 1959.
[8] A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science, 286:509–512, 1999.
[9] T. Heskes. Stable fixed points of loopy belief propagation are local minima of the Bethe free energy. In Advances in Neural Information Processing Systems, volume 15, pages 343–350, 2003.
[10] M. Welling and Y. W. Teh. Belief optimization for binary networks: a stable alternative to loopy belief propagation. In Proc. of the Conf. on Uncertainty in AI, volume 17, 2001.
[11] J. M. Mooij and H. J. Kappen. Spin-glass phase transitions on real-world graphs. Preprint, cond-mat/0408378, 2004.
[12] L. Viana and A. Bray. Phase diagrams for dilute spin glasses. J. Phys. C: Solid State Phys., 18:3037–3051, 1985.
Incremental Algorithms
for Hierarchical Classification*
Nicolò Cesa-Bianchi
Università di Milano
Milano, Italy
Claudio Gentile
Università dell'Insubria
Varese, Italy
Andrea Tironi  Luca Zaniboni
Università di Milano
Crema, Italy
Abstract
We study the problem of hierarchical classification when labels corresponding to partial and/or multiple paths in the underlying taxonomy are
allowed. We introduce a new hierarchical loss function, the H-loss, implementing the simple intuition that additional mistakes in the subtree of
a mistaken class should not be charged for. Based on a probabilistic data
model introduced in earlier work, we derive the Bayes-optimal classifier
for the H-loss. We then empirically compare two incremental approximations of the Bayes-optimal classifier with a flat SVM classifier and
with classifiers obtained by using hierarchical versions of the Perceptron
and SVM algorithms. The experiments show that our simplest incremental approximation of the Bayes-optimal classifier performs, after just one
training epoch, nearly as well as the hierarchical SVM classifier (which
performs best). For the same incremental algorithm we also derive an
H-loss bound showing, when data are generated by our probabilistic data
model, exponentially fast convergence to the H-loss of the hierarchical
classifier based on the true model parameters.
1
Introduction and basic definitions
We study the problem of classifying data in a given taxonomy of labels, where the taxonomy is specified as a tree forest. We assume that every data instance is labelled with a
(possibly empty) set of class labels called multilabel, with the only requirement that multilabels including some node i in the taxonomy must also include all ancestors of i. Thus,
each multilabel corresponds to the union of one or more paths in the forest, where each
path must start from a root but it can terminate on an internal node (rather than a leaf).
Learning algorithms for hierarchical classification have been investigated in, e.g., [8, 9, 10,
11, 12, 14, 15, 17, 20]. However, the scenario where labelling includes multiple and partial
paths has received very little attention. The analysis in [5], which is mainly theoretical,
shows in the multiple and partial path case a 0/1-loss bound for a hierarchical learning
algorithm based on regularized least-squares estimates.
In this work we extend [5] in several ways. First, we introduce a new hierarchical loss function, the H-loss, which is better suited than the 0/1-loss to analyze hierarchical classification
tasks, and we derive the corresponding Bayes-optimal classifier under the parametric data
model introduced in [5]. Second, considering various loss functions, including the H-loss,
we empirically compare the performance of the following three incremental kernel-based
* This work was supported in part by the PASCAL Network of Excellence under EC grant no.
506778. This publication only reflects the authors' views.
algorithms: 1) a hierarchical version of the classical Perceptron algorithm [16]; 2) an approximation to the Bayes-optimal classifier; 3) a simplified variant of this approximation.
Finally, we show that, assuming data are indeed generated according to the parametric
model mentioned before, the H-loss of the algorithm in 3) converges to the H-loss of the
classifier based on the true model parameters. Our incremental algorithms are based on
training linear-threshold classifiers in each node of the taxonomy. A similar approach has
been studied in [8], though their model does not consider multiple-path classifications as
we do.
Incremental algorithms are the main focus of this research, since we strongly believe that
they are a key tool for coping with tasks where large quantities of data items are generated
and the classification system needs to be frequently adjusted to keep up with new items.
However, we found it useful to provide a reference point for our empirical results. Thus we
have also included in our experiments the results achieved by nonincremental algorithms.
In particular, we have chosen a flat and a hierarchical version of SVM [21, 7, 19], which
are known to perform well on the textual datasets considered here.
We assume data elements are encoded as real vectors x ∈ R^d which we call instances.
A multilabel for an instance x is any subset of the set {1, ..., N} of all labels/classes,
including the empty set. We denote the multilabel associated with x by a vector y =
(y_1, ..., y_N) ∈ {0,1}^N, where i belongs to the multilabel of x if and only if y_i = 1.
A taxonomy G is a forest whose trees are defined over the set of labels. A multilabel
y ∈ {0,1}^N is said to respect a taxonomy G if and only if y is the union of one or more
paths in G, where each path starts from a root but need not terminate on a leaf. See Figure 1.
We assume the data-generating mechanism produces examples (x, y) such that y respects
some fixed underlying taxonomy G with N nodes. The set of roots in G is denoted by
root(G). We use par(i) to denote the unique parent of node i, anc(i) to denote the set of
ancestors of i, and sub(i) to denote the set of nodes in the subtree rooted at i (including i).
Finally, given a predicate φ over a set Ω, we will use {φ} to denote both the subset of Ω
where φ is true and the indicator function of this subset.
2
The H-loss
Though several hierarchical losses have been proposed in the literature (e.g., in [11, 20]), no
one has emerged as a standard yet. Since hierarchical losses are defined over multilabels,
we start by considering two very simple functions measuring the discrepancy between
multilabels ŷ = (ŷ_1, ..., ŷ_N) and y = (y_1, ..., y_N): the 0/1-loss
ℓ_{0/1}(ŷ, y) = {∃i : ŷ_i ≠ y_i}
and the symmetric difference loss ℓ_Δ(ŷ, y) = {ŷ_1 ≠ y_1} + ... + {ŷ_N ≠ y_N}.
There are several ways of making these losses depend on a given taxonomy G. In this
work, we follow the intuition "if a mistake is made at node i, then further mistakes made
in the subtree rooted at i are unimportant". That is, we do not require the algorithm to be able
to make fine-grained distinctions on tasks when it is unable to make coarse-grained ones.
For example, if an algorithm failed to label a document with the class SPORTS, then the
algorithm should not be charged more loss because it also failed to label the same document with the subclass SOCCER and the sub-subclass CHAMPIONS LEAGUE. A function
implementing this intuition is defined by
ℓ_H(ŷ, y) = Σ_{i=1}^N c_i {ŷ_i ≠ y_i ∧ ŷ_j = y_j, j ∈ anc(i)},
where c1 , . . . , cN > 0 are fixed cost coefficients. This loss, which we call H-loss, can
also be described as follows: all paths in G from a root down to a leaf are examined and,
whenever we encounter a node i such that ŷ_i ≠ y_i, we add c_i to the loss, whereas all the
loss contributions in the subtree rooted at i are discarded. Note that if c_1 = ... = c_N = 1
then ℓ_{0/1} ≤ ℓ_H ≤ ℓ_Δ. Choices of c_i depending on the structure of G are proposed in
Section 4. Given a multilabel ŷ ∈ {0,1}^N, define its G-truncation as the multilabel
y′ = (y′_1, ..., y′_N) ∈ {0,1}^N where, for each i = 1, ..., N, y′_i = 1 iff ŷ_i = 1 and ŷ_j = 1 for all
j ∈ anc(i). Note that the G-truncation of any multilabel always respects G. A graphical
Figure 1: A one-tree forest (repeated four times). Each node corresponds to a class in the
taxonomy G, hence in this case N = 12. Gray nodes are included in the multilabel under
consideration, white nodes are not. (a) A generic multilabel which does not respect G; (b)
its G-truncation. (c) A second multilabel that respects G. (d) Superposition of multilabel
(b) on multilabel (c): Only the checked nodes contribute to the H-loss between (b) and (c).
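To make the two notions above concrete, here is a small sketch; it is our illustration, not
code from the paper, and the parent-array encoding of G (with -1 marking roots) is a
hypothetical convention.

```python
import numpy as np

def g_truncation(y_hat, parent):
    """G-truncation: zero out any node with some ancestor predicted 0."""
    n = len(y_hat)
    y_trunc = np.zeros(n, dtype=int)
    for i in range(n):
        j, ancestors_on = i, True
        while parent[j] != -1:            # walk up to the root
            j = parent[j]
            if y_hat[j] == 0:
                ancestors_on = False
                break
        y_trunc[i] = y_hat[i] if ancestors_on else 0
    return y_trunc

def h_loss(y_hat, y, parent, c):
    """H-loss: charge c[i] only when i is the first mistake on its path."""
    loss = 0.0
    for i in range(len(y)):
        if y_hat[i] != y[i]:
            j, first_mistake = i, True
            while parent[j] != -1:        # a mistaken ancestor absorbs the charge
                j = parent[j]
                if y_hat[j] != y[j]:
                    first_mistake = False
                    break
            if first_mistake:
                loss += c[i]
    return loss

# Toy taxonomy: node 0 is a root with children 1 and 2; node 3 is a child of 2.
parent = [-1, 0, 0, 2]
c = np.ones(4)
y     = np.array([1, 0, 1, 1])
y_hat = np.array([1, 1, 0, 1])            # does not respect G (node 3 sits under a 0)
print(h_loss(g_truncation(y_hat, parent), y, parent, c))   # prints 2.0
```

On this toy pair one gets ℓ_{0/1} = 1 ≤ ℓ_H = 2 ≤ ℓ_Δ = 3, consistent with the uniform-cost
relation stated above.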
representation of the notions introduced so far is given in Figure 1. In the next lemma we
show that whenever y respects G, then ℓ_H(ŷ, y) cannot be smaller than ℓ_H(y′, y). In other
words, when the multilabel y to be predicted respects a taxonomy G, there is no loss
of generality in restricting to predictions which respect G.

Lemma 1. Let G be a taxonomy, y, ŷ ∈ {0,1}^N be two multilabels such that y respects
G, and y′ be the G-truncation of ŷ. Then ℓ_H(y′, y) ≤ ℓ_H(ŷ, y).

Proof. For each i = 1, ..., N we show that y′_i ≠ y_i and y′_j = y_j for all j ∈ anc(i) implies
ŷ_i ≠ y_i and ŷ_j = y_j for all j ∈ anc(i). Pick some i and suppose y′_i ≠ y_i and y′_j = y_j for
all j ∈ anc(i). Now suppose y′_j = 0 (and thus y_j = 0) for some j ∈ anc(i). Then y_i = 0
since y respects G. But this implies y′_i = 1, contradicting the fact that the G-truncation
y′ respects G. Therefore, it must be the case that y′_j = y_j = 1 for all j ∈ anc(i). Hence
the G-truncation of ŷ left each node j ∈ anc(i) unchanged, implying ŷ_j = y_j for all
j ∈ anc(i). But, since the G-truncation of ŷ does not change the value of a node i whose
ancestors j are such that ŷ_j = 1, this also implies ŷ_i = y′_i. Therefore ŷ_i ≠ y_i and the proof
is concluded.
3
A probabilistic data model
Our learning algorithms are based on the following statistical model for the data, originally
introduced in [5]. The model defines a probability distribution fG over the set of multilabels
respecting a given taxonomy G by associating with each node i of G a Bernoulli random
variable Yi and defining
f_G(y | x) = Π_{i=1}^N P(Y_i = y_i | Y_par(i) = y_par(i), X = x).

To guarantee that f_G(y | x) = 0 whenever y ∈ {0,1}^N does not respect G, we set
P(Y_i = 1 | Y_par(i) = 0, X = x) = 0. Notice that this definition of f_G makes the (rather
simplistic) assumption that all Y_k with the same parent node i (i.e., the children of i)
are independent when conditioned on Y_i and x. Through f_G we specify an i.i.d. process
{(X_1, Y_1), (X_2, Y_2), ...}, where, for t = 1, 2, ..., the multilabel Y_t is distributed according to f_G(· | X_t) and X_t is distributed according to a fixed and unknown distribution
D. Each example (x_t, y_t) is thus a realization of the corresponding pair (X_t, Y_t) of random variables. Our parametric model for f_G is described as follows. First, we assume that
the support of D is the surface of the d-dimensional unit sphere (i.e., instances x ∈ R^d are
such that ||x|| = 1). With each node i in the taxonomy, we associate a unit-norm weight
vector u_i ∈ R^d. Then, we define the conditional probabilities for a nonroot node i with
parent j by P(Y_i = 1 | Y_j = 1, X = x) = (1 + u_i^T x)/2. If i is a root node, the previous
equation simplifies to P(Y_i = 1 | X = x) = (1 + u_i^T x)/2.
3.1
The Bayes-optimal classifier for the H-loss
We now describe a classifier, called H-BAYES, that is the Bayes-optimal classifier for
the H-loss. In other words, H-BAYES classifies any instance x with the multilabel
ŷ = argmin_{y′∈{0,1}^N} E[ℓ_H(y′, Y) | x]. Define p_i(x) = P(Y_i = 1 | Y_par(i) = 1, X = x).
When no ambiguity arises, we write p_i instead of p_i(x). Now, fix any unit-length instance
x and let ŷ be a multilabel that respects G. For each node i in G, recursively define

H_{i,x}(ŷ) = c_i (p_i (1 − ŷ_i) + (1 − p_i) ŷ_i) + Σ_{k∈child(i)} H_{k,x}(ŷ).

The classifier H-BAYES operates as follows. It starts by putting all nodes of G in a set S;
nodes are then removed from S one by one. A node i can be removed only if i is a leaf or
if all nodes j in the subtree rooted at i have already been removed. When i is removed, its
value ŷ_i is set to 1 if and only if

p_i (2 − Σ_{k∈child(i)} H_{k,x}(ŷ)/c_i) ≥ 1.    (1)

(Note that if i is a leaf then (1) is equivalent to ŷ_i = {p_i ≥ 1/2}.) If ŷ_i is set to zero, then
all nodes in the subtree rooted at i are set to zero.
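The removal schedule above is a bottom-up sweep; the sketch below is our direct
transcription of that description (not the authors' code), again with G given as a
hypothetical parent array and nodes numbered parents-first.

```python
import numpy as np

def h_bayes(p, c, parent):
    """Bayes-optimal multilabel for the H-loss, given p[i] = p_i(x) and costs c[i]."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for i in range(n):
        if parent[i] != -1:
            children[parent[i]].append(i)
    y_hat = np.zeros(n, dtype=int)
    H = np.zeros(n)
    for i in reversed(range(n)):           # leaves first
        s = sum(H[k] for k in children[i])
        y_hat[i] = int(p[i] * (2.0 - s / c[i]) >= 1.0)     # test (1)
        H[i] = c[i] * (p[i] * (1 - y_hat[i]) + (1 - p[i]) * y_hat[i]) + s
    for i in range(n):                      # zero the subtree of any 0-valued node
        if parent[i] != -1 and y_hat[parent[i]] == 0:
            y_hat[i] = 0
    return y_hat

parent = [-1, 0, 0, 2]
print(h_bayes(np.array([0.95, 0.2, 0.9, 0.8]), np.ones(4), parent))  # -> [1 0 1 1]
```

For a leaf the test reduces to p_i ≥ 1/2, matching the remark after (1).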
Theorem 2. For any taxonomy G and all unit-length x ∈ R^d, the multilabel generated by
H-BAYES is the Bayes-optimal classification of x for the H-loss.

Proof sketch. Let ŷ be the multilabel assigned by H-BAYES and y* be any multilabel
minimizing the expected H-loss. Introducing the shorthand E_x[·] = E[· | x], we can write

E_x ℓ_H(ŷ, Y) = Σ_{i=1}^N c_i (p_i (1 − ŷ_i) + (1 − p_i) ŷ_i) Π_{j∈anc(i)} p_j {ŷ_j = 1}.

Note that we can recursively decompose the expected H-loss as

E_x ℓ_H(ŷ, Y) = Σ_{i∈root(G)} E_x H_i(ŷ, Y),

where

E_x H_i(ŷ, Y) = c_i (p_i (1 − ŷ_i) + (1 − p_i) ŷ_i) Π_{j∈anc(i)} p_j {ŷ_j = 1}
               + Σ_{k∈child(i)} E_x H_k(ŷ, Y).    (2)

Pick a node i. If i is a leaf, then the sum in the RHS of (2) disappears and y*_i = {p_i ≥ 1/2},
which is also the minimizer of H_{i,x}(ŷ) = c_i (p_i (1 − ŷ_i) + (1 − p_i) ŷ_i), implying ŷ_i = y*_i.
Now let i be an internal node and inductively assume ŷ_j = y*_j for all j ∈ sub(i). Notice
that the factors Π_{j∈anc(i)} p_j {ŷ_j = 1} occur in both terms in the RHS of (2). Hence y*_i does
not depend on these factors and we can equivalently minimize

c_i (p_i (1 − ŷ_i) + (1 − p_i) ŷ_i) + p_i {ŷ_i = 1} Σ_{k∈child(i)} H_{k,x}(ŷ),    (3)

where we noted that, for each k ∈ child(i),

E_x H_k(ŷ, Y) = Π_{j∈anc(i)} p_j {ŷ_j = 1} · p_i {ŷ_i = 1} · H_{k,x}(ŷ).

Now observe that y*_i minimizing (3) is equivalent to the assignment produced by H-BAYES.
To conclude the proof, note that whenever y*_i = 0, Lemma 1 requires that y*_j = 0 for all
nodes j ∈ sub(i), which is exactly what H-BAYES does.
4
The algorithms
We consider three incremental algorithms. Each one of these algorithms learns a hierarchical classifier by training a decision function g_i : R^d → {0,1} at each node i = 1, ..., N.
For a given set g_1, ..., g_N of decision functions, the hierarchical classifier generated by
these algorithms classifies an instance x through a multilabel ŷ = (ŷ_1, ..., ŷ_N) defined as
follows:

ŷ_i = g_i(x)   if i ∈ root(G) or ŷ_j = 1 for all j ∈ anc(i);   ŷ_i = 0   otherwise.    (4)
Note that ŷ computed this way respects G. The classifiers (4) are trained incrementally.
Let g_{i,t} be the decision function at node i after training on the first t − 1 examples. When
the next training example (x_t, y_t) is available, the algorithms compute the multilabel ŷ_t
using classifier (4) based on g_{1,t}(x_t), ..., g_{N,t}(x_t). Then, the algorithms consider for
an update only those decision functions sitting at nodes i satisfying either i ∈ root(G)
or y_{par(i),t} = 1. We call such nodes eligible at time t. The decision functions of all
other nodes are left unchanged. The first algorithm we consider is a simple hierarchical
version of the Perceptron algorithm [16], which we call H-PERC. The decision functions at
time t are defined by g_{i,t}(x_t) = {w_{i,t}^T x_t ≥ 0}. In the update phase, the Perceptron rule
w_{i,t+1} = w_{i,t} + y_{i,t} x_t is applied to every node i eligible at time t and such that ŷ_{i,t} ≠ y_{i,t}.
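In code, H-PERC is a few lines. The sketch below is our rendering of the protocol just
described, not the authors' implementation; in particular it maps the {0,1} label to ±1 in
the update, which we assume is the intended reading of the Perceptron rule.

```python
import numpy as np

def hier_predict(W, x, parent):
    """Classifier (4): evaluate g_i only where all ancestors predicted 1."""
    n = len(parent)
    y_hat = np.zeros(n, dtype=int)
    for i in range(n):                      # parents precede children
        if parent[i] == -1 or y_hat[parent[i]] == 1:
            y_hat[i] = int(W[i] @ x >= 0)
    return y_hat

def h_perc_epoch(W, X, Y, parent):
    """One training epoch of the hierarchical Perceptron (H-PERC)."""
    for x, y in zip(X, Y):
        y_hat = hier_predict(W, x, parent)
        for i in range(len(parent)):
            eligible = parent[i] == -1 or y[parent[i]] == 1
            if eligible and y_hat[i] != y[i]:
                W[i] += (2 * y[i] - 1) * x  # {0,1} label mapped to -1/+1
    return W
```

Checking only the parent in hier_predict is enough: a parent can be predicted 1 only if all
of its own ancestors were predicted 1 as well.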
The second algorithm, called APPROX-H-BAYES, approximates the H-BAYES classifier of
Section 3.1 by replacing the unknown quantities p_i(x_t) with the estimates (1 + w_{i,t}^T x_t)/2. The
weights w_{i,t} are regularized least-squares estimates defined by

w_{i,t} = (I + S_{i,t−1} S_{i,t−1}^T + x_t x_t^T)^{−1} S_{i,t−1} y_{t−1}^{(i)}.    (5)

The columns of the matrix S_{i,t−1} are all past instances x_s that have been stored at node i;
the s-th component of vector y_{t−1}^{(i)} is the i-th component y_{i,s} of the multilabel y_s associated
with instance x_s. In the update phase, an instance x_t is stored at node i, causing an update
of w_{i,t}, whenever i is eligible at time t and |w_{i,t}^T x_t| ≤ √((5 ln t)/N_{i,t}), where N_{i,t} is
the number of instances stored at node i up to time t − 1. The corresponding decision
functions g_{i,t} are of the form g_{i,t}(x_t) = {w_{i,t}^T x_t ≥ τ_{i,t}}, where the threshold τ_{i,t} ≥ 0 at
node i depends on the margin values w_{j,t}^T x_t achieved by nodes j ∈ sub(i); recall (1).
Note that g_{i,t} is not a linear-threshold function, as x_t appears in the definition of w_{i,t}. The
margin threshold √((5 ln t)/N_{i,t}), controlling the update of node i at time t, reduces the
space requirements of the classifier by keeping the matrices S_{i,t} suitably small. This threshold
is motivated by the work [4] on selective sampling.
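A direct (non-incremental) rendering of estimate (5) and of the margin test that controls
storage is given below. This is our sketch; a practical implementation would maintain the
inverse incrementally in dual variables, as in [3].

```python
import numpy as np

def rls_weight(S, y_node, x_t):
    """w_{i,t} = (I + S S^T + x_t x_t^T)^{-1} S y_node, as in (5).
    S is d x m (stored instances as columns); y_node holds their labels at node i."""
    d = S.shape[0]
    A = np.eye(d) + S @ S.T + np.outer(x_t, x_t)
    return np.linalg.solve(A, S @ y_node)

def should_store(w, x_t, N_it, t):
    """Store x_t at node i iff |w^T x_t| <= sqrt(5 ln t / N_{i,t})."""
    return abs(w @ x_t) <= np.sqrt(5.0 * np.log(t) / max(N_it, 1))
```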
The third algorithm, which we call H-RLS (Hierarchical Regularized Least Squares), is a
simplified variant of APPROX-H-BAYES in which the thresholds τ_{i,t} are set to zero. That
is, we have g_{i,t}(x_t) = {w_{i,t}^T x_t ≥ 0}, where the weights w_{i,t} are defined as in (5) and
updated as in the APPROX-H-BAYES algorithm. Details on how to run APPROX-H-BAYES
and H-RLS in dual variables and perform an update at node i in time O(N_{i,t}^2) are found
in [3] (where a mistake-driven version of H-RLS is analyzed).
5
Experimental results
The empirical evaluation of the algorithms was carried out on two well-known datasets of
free-text documents. The first dataset consists of the first (in chronological order) 100,000
newswire stories from the Reuters Corpus Volume 1, RCV1 [2]. The associated taxonomy
of labels, which are the topics of the documents, has 101 nodes organized in a forest of
4 trees. The forest is shallow: the longest path has length 3 and the the distribution of
nodes, sorted by increasing path length, is {0.04, 0.53, 0.42, 0.01}. For this dataset, we
used the bag-of-words vectorization performed by Xerox Research Center Europe within
the EC project KerMIT (see [4] for details on preprocessing). The 100,000 documents
were divided into 5 equally sized groups of chronologically consecutive documents. We
then used each adjacent pair of groups as training and test set in an experiment (here the
fifth and first group are considered adjacent), and then averaged the test set performance
over the 5 experiments.
The second dataset is a specific subtree of the OHSUMED corpus of medical abstracts [1]:
the subtree rooted in "Quality of Health Care" (MeSH code N05.715). After removing
overlapping classes (OHSUMED is not quite a tree but a DAG), we ended up with 94
Table 1: Experimental results on two hierarchical text classification tasks under various loss
functions. We report average test errors along with standard deviations (in parentheses). In
bold are the best performance figures among the incremental algorithms.

RCV1      0/1-loss        unif. H-loss    norm. H-loss    Δ-loss
PERC      0.702 (±0.045)  1.196 (±0.127)  0.100 (±0.029)  1.695 (±0.182)
H-PERC    0.655 (±0.040)  1.224 (±0.114)  0.099 (±0.028)  1.861 (±0.172)
H-RLS     0.456 (±0.010)  0.743 (±0.026)  0.057 (±0.001)  1.086 (±0.036)
AH-BAY    0.550 (±0.010)  0.815 (±0.028)  0.090 (±0.001)  1.465 (±0.040)
SVM       0.482 (±0.009)  0.790 (±0.023)  0.057 (±0.001)  1.173 (±0.051)
H-SVM     0.440 (±0.008)  0.712 (±0.021)  0.055 (±0.001)  1.050 (±0.027)

OHSUMED   0/1-loss        unif. H-loss    norm. H-loss    Δ-loss
PERC      0.899 (±0.024)  1.938 (±0.219)  0.058 (±0.005)  2.639 (±0.226)
H-PERC    0.846 (±0.024)  1.560 (±0.155)  0.057 (±0.005)  2.528 (±0.251)
H-RLS     0.769 (±0.004)  1.200 (±0.007)  0.045 (±0.000)  1.957 (±0.011)
AH-BAY    0.819 (±0.004)  1.197 (±0.006)  0.047 (±0.000)  2.029 (±0.009)
SVM       0.784 (±0.003)  1.206 (±0.003)  0.044 (±0.000)  1.872 (±0.005)
H-SVM     0.759 (±0.002)  1.170 (±0.005)  0.044 (±0.000)  1.910 (±0.007)
classes and 55,503 documents. We made this choice based only on the structure of the
subtree: the longest path has length 4, the distribution of nodes sorted by increasing path
length is {0.26, 0.37, 0.22, 0.12, 0.03}, and there are a significant number of partial and
multiple path multilabels. The vectorization of the subtree was carried out as follows: after
tokenization, we removed all stopwords and also those words that did not occur at least 3
times in the corpus. Then, we vectorized the documents using the Bow library [13] with a
log(1 + TF) log(IDF) encoding. We ran 5 experiments by randomly splitting the corpus in a
training set of 40,000 documents and a test set of 15,503 documents. Test set performances
are averages over these 5 experiments. In the training set we kept more documents than
in the RCV1 splits since the OHSUMED corpus turned out to be a harder classification
problem than RCV1. In both datasets instances have been normalized to unit length. We
tested the hierarchical Perceptron algorithm (H-PERC), the hierarchical regularized least-squares
algorithm (H-RLS), and the approximated Bayes-optimal algorithm (APPROX-H-BAYES), all
described in Section 4. The results are summarized in Table 1. APPROX-H-BAYES (AH-BAY
in Table 1) was trained using cost coefficients c_i chosen as follows: if i ∈ root(G) then
c_i = |root(G)|^{−1}; otherwise, c_i = c_j/|child(j)|, where j is the parent of i. Note that this
choice of coefficients amounts to splitting a unit cost equally among the roots and then
splitting each node's cost recursively and equally among its children. Since, in this case,
0 ≤ ℓ_H ≤ 1, we call the resulting loss the normalized H-loss. We also tested a hierarchical
version of SVM (denoted by H-SVM in Table 1) in which each node is an SVM classifier
trained using a batch version of our hierarchical learning protocol. More precisely, each
node i was trained only on those examples (x_t, y_t) such that y_{par(i),t} = 1 (note that, as no
conditions are imposed on y_{i,t}, node i is actually trained on both positive and negative
examples). The resulting set of linear-threshold functions was then evaluated on the test set
using the hierarchical classification scheme (4). We tried both the C and ν parametrizations
[18] for SVM and found the setting C = 1 to work best for our data.¹ We finally tested the
"flat" variants of Perceptron and SVM, denoted by PERC and SVM. In these variants, each
node is trained and evaluated independently of the others, disregarding all taxonomical
information. All SVM experiments were carried out using the libSVM implementation [6].
All the tested algorithms used a linear kernel.

¹ It should be emphasized that this tuning of C was actually chosen in hindsight, with no
cross-validation.
As far as loss functions are concerned, we considered the 0/1-loss, the H-loss with cost
coefficients set to 1 (denoted by uniform H-loss), the normalized H-loss, and the symmetric
difference loss (denoted by Δ-loss). Note that H-SVM performs best, but our incremental
algorithms were trained for a single epoch on the training set. The good performance of
SVM (the flat variant of H-SVM) is surprising. However, with a single epoch of training,
H-RLS does not perform worse than SVM (except on OHSUMED under the normalized
H-loss) and comes reasonably close to H-SVM. On the other hand, the performance of
APPROX-H-BAYES is disappointing: on OHSUMED it is the best algorithm only for the
uniform H-loss, though it was trained using the normalized H-loss; on RCV1 it never outperforms H-RLS, though it always does better than PERC and H-PERC. A possible explanation for this behavior is that APPROX-H-BAYES is very sensitive to errors in the estimates
of p_i(x) (recall Section 3.1). Indeed, the least-squares estimates (5), which we used to
approximate H-BAYES, seem to work better in practice within simpler (and possibly more
robust) algorithms, such as H-RLS. The lower values of the normalized H-loss on OHSUMED
(a harder corpus than RCV1) can be explained by the fact that a quarter of the 94 nodes in
the OHSUMED taxonomy are roots, and thus each top-level mistake is only charged about
4/94. As a final remark, we observe that the normalized H-loss gave too small a range of
values to afford fine comparisons among the best-performing algorithms.
6
Regret bounds for the H-loss
In this section we prove a theoretical bound on the H-loss of a slight variant of the algorithm
H-RLS tested in Section 5. More precisely, we assume data are generated according to
the probabilistic model introduced in Section 3 with unknown instance distribution D and
unknown coefficients u_1, ..., u_N. We define the regret of a classifier assigning label ŷ
to instance X as E ℓ_H(ŷ, Y) − E ℓ_H(y, Y), where the expected value is with respect to the
random draw of (X, Y) and y is the multilabel assigned by classifier (4) when the decision
functions g_i are zero-threshold functions of the form g_i(x) = {u_i^T x ≥ 0}. The theorem
below shows that the regret of the classifier learned by a variant of H-RLS after t training
examples, with t large enough, is exponentially small in t. In other words, H-RLS learns
to classify as well as the algorithm that is given the true parameters u_1, ..., u_N of the
underlying data-generating process. We have been able to prove the theorem only for the
variant of H-RLS storing all instances at each node. That is, every eligible node at time t is
updated, irrespective of whether |w_{i,t}^T x_t| ≤ √((5 ln t)/N_{i,t}).

Given the i.i.d. data-generating process (X_1, Y_1), (X_2, Y_2), ..., for each node k we define the derived process X_{k1}, X_{k2}, ... including all and only the instances X_s of the original process that satisfy Y_{par(k),s} = 1. We call this derived process the process at node k.
Note that, for each k, the process at node k is an i.i.d. process. However, its distribution
might depend on k. The spectrum of the process at node k is the set of eigenvalues of the
correlation matrix with entries E[X_{k1,i} X_{k1,j}] for i, j = 1, ..., d. We have the following
theorem, whose proof is omitted due to space limitations.
Theorem 3. Let G be a taxonomy with N nodes and let f_G be a joint density for G
parametrized by N unit-norm vectors u_1, ..., u_N ∈ R^d. Assume the instance distribution
is such that there exist γ_1, ..., γ_N > 0 satisfying P(|u_i^T X_t| ≥ γ_i) = 1 for
i = 1, ..., N. Then, for all

t > max{ max_{i=1,...,N} 16/(α_i γ_i λ_i),  max_{i=1,...,N} 192 d/(α_i γ_i^2) },

the regret E ℓ_H(ŷ_t, Y_t) − E ℓ_H(y_t, Y_t) of the modified H-RLS algorithm is at most

Σ_{i=1}^N ( α_i t e^{−κ_1 γ_i^2 λ_i t α_i} + t^2 e^{−κ_2 λ_i t α_i} ) ( Σ_{j∈sub(i)} c_j ),

where κ_1, κ_2 are constants, α_i = E[ Π_{j∈anc(i)} (1 + u_j^T X)/2 ], and λ_i is the smallest
eigenvalue in the spectrum of the process at node i.
7
Conclusions and open problems
In this work we have studied the problem of hierarchical classification of data instances
in the presence of partial and multiple path labellings. We have introduced a new hierarchical loss function, the H-loss, derived the corresponding Bayes-optimal classifier, and
empirically compared an incremental approximation to this classifier with some other incremental and nonincremental algorithms. Finally, we have derived a theoretical guarantee
on the H-loss of a simplified variant of the approximated Bayes-optimal algorithm.
Our investigation leaves several open issues. The current approximation to the Bayes-optimal classifier is not satisfactory, and this could be due to a bad choice of the model, of
the estimators, of the datasets, or of a combination of them. Also, the normalized H-loss
is not fully satisfying, since the resulting values are often too small. From the theoretical
viewpoint, we would like to analyze the regret of our algorithms with respect to the Bayesoptimal classifier, rather than with respect to a classifier that makes a suboptimal use of the
true model parameters.
References
[1] The OHSUMED test collection. URL: medir.ohsu.edu/pub/ohsumed/.
[2] Reuters corpus volume 1. URL: about.reuters.com/researchandstandards/corpus/.
[3] N. Cesa-Bianchi, A. Conconi, and C. Gentile. A second-order Perceptron algorithm. In Proc.
15th COLT, pages 121-137. Springer, 2002.
[4] N. Cesa-Bianchi, A. Conconi, and C. Gentile. Learning probabilistic linear-threshold classifiers
via selective sampling. In Proc. 16th COLT, pages 373-386. Springer, 2003.
[5] N. Cesa-Bianchi, A. Conconi, and C. Gentile. Regret bounds for hierarchical classification with
linear-threshold functions. In Proc. 17th COLT. Springer, 2004. To appear.
[6] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines. URL:
www.csie.ntu.edu.tw/~cjlin/libsvm/.
[7] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge
University Press, 2001.
[8] O. Dekel, J. Keshet, and Y. Singer. Large margin hierarchical classification. In Proc. 21st ICML.
Omnipress, 2004.
[9] S.T. Dumais and H. Chen. Hierarchical classification of web content. In Proc. 23rd ACM Int.
Conf. on Research and Development in Information Retrieval, pages 256-263. ACM Press, 2000.
[10] M. Granitzer. Hierarchical Text Classification using Methods from Machine Learning. PhD
thesis, Graz University of Technology, 2003.
[11] T. Hofmann, L. Cai, and M. Ciaramita. Learning with taxonomies: Classifying documents and
words. In NIPS Workshop on Syntax, Semantics, and Statistics, 2003.
[12] D. Koller and M. Sahami. Hierarchically classifying documents using very few words. In Proc.
14th ICML. Morgan Kaufmann, 1997.
[13] A. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification
and clustering. URL: www-2.cs.cmu.edu/~mccallum/bow/.
[14] A.K. McCallum, R. Rosenfeld, T.M. Mitchell, and A.Y. Ng. Improving text classification by
shrinkage in a hierarchy of classes. In Proc. 15th ICML. Morgan Kaufmann, 1998.
[15] D. Mladenic. Turning Yahoo into an automatic web-page classifier. In Proceedings of the 13th
European Conference on Artificial Intelligence, pages 473-474, 1998.
[16] F. Rosenblatt. The Perceptron: A probabilistic model for information storage and organization
in the brain. Psychol. Review, 65:386-408, 1958.
[17] M.E. Ruiz and P. Srinivasan. Hierarchical text categorization using neural networks. Information Retrieval, 5(1):87-118, 2002.
[18] B. Schölkopf, A.J. Smola, R.C. Williamson, and P.L. Bartlett. New support vector algorithms.
Neural Computation, 12:1207-1245, 2000.
[19] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, 2002.
[20] A. Sun and E.-P. Lim. Hierarchical text classification and evaluation. In Proc. 2001 Int. Conf.
on Data Mining, pages 521-528. IEEE Press, 2001.
[21] V.N. Vapnik. Statistical Learning Theory. Wiley, 1998.
approximates:1 significant:1 cambridge:1 dag:1 mistaken:1 rd:6 approx:8 league:1 tuning:1 automatic:1 newswire:1 shawe:1 language:1 toolkit:1 europe:1 surface:1 add:1 italy:3 belongs:1 driven:1 disappointing:1 scenario:1 zaniboni:1 yi:34 morgan:2 gentile:4 additional:1 care:1 multiple:6 reduces:1 sphere:1 lin:1 retrieval:2 luca:1 divided:1 equally:3 parenthesis:1 prediction:1 variant:9 basic:1 simplistic:1 cmu:1 kernel:3 achieved:2 c1:2 whereas:1 fine:2 concluded:1 sch:2 seem:1 call:7 presence:1 split:1 enough:1 concerned:1 gave:1 associating:1 suboptimal:1 simplifies:1 cn:2 whether:1 motivated:1 bartlett:1 url:4 afford:1 remark:1 useful:1 unimportant:1 amount:1 simplest:1 exist:1 notice:2 rosenblatt:1 write:2 srinivasan:1 group:3 key:1 four:1 putting:1 threshold:10 varese:1 pj:4 libsvm:3 kept:1 y10:1 chronologically:1 sum:1 run:1 eligible:4 draw:1 decision:8 bound:5 hi:2 occur:2 precisely:2 idf:1 flat:4 u1:3 rcv1:6 performing:1 according:4 xerox:1 combination:1 smaller:1 wi:4 shallow:1 labellings:1 making:1 tw:1 explained:1 ybn:2 equation:1 ln:3 mechanism:1 cjlin:1 singer:1 sahami:1 available:1 observe:2 hierarchical:31 generic:1 batch:1 encounter:1 original:1 top:1 clustering:1 include:1 graphical:1 k1:1 uj:1 classical:1 unchanged:2 already:1 quantity:2 parametric:3 said:1 hq:1 unable:1 parametrized:1 topic:1 assuming:1 length:7 code:1 ciaramita:1 minimizing:2 equivalently:1 taxonomy:18 negative:1 implementation:1 unknown:4 perform:3 bianchi:4 datasets:4 discarded:1 pat:1 defining:1 y1:6 introduced:6 pair:2 specified:1 distinction:1 textual:1 learned:1 nip:1 able:2 below:1 including:5 max:1 explanation:1 regularized:4 indicator:1 turning:1 scheme:1 technology:1 library:2 disappears:1 irrespective:1 carried:3 psychol:1 health:1 text:7 epoch:3 literature:1 review:1 nicol:1 loss:60 par:1 fully:1 limitation:1 nonincremental:2 vectorized:1 viewpoint:1 story:1 classifying:3 pi:20 storing:1 supported:1 truncation:7 keeping:1 free:1 perceptron:8 fifth:1 fg:8 distributed:2 qn:1 author:1 made:3 collection:1 preprocessing:1 simplified:3 ec:2 far:2 approximate:1 keep:1 corpus:8 conclude:1 spectrum:2 un:3 vectorization:2 bay:3 table:4 terminate:2 reasonably:1 robust:1 forest:6 improving:1 anc:16 williamson:1 investigated:1 european:1 protocol:1 did:1 main:1 ybj:8 hierarchically:1 rh:2 reuters:3 contradicting:1 allowed:1 repeated:1 child:8 wiley:1 sub:6 third:1 learns:2 grained:2 ruiz:1 down:1 theorem:5 removing:1 bad:1 xt:20 specific:1 emphasized:1 showing:1 maxi:2 x:2 svm:20 disregarding:1 workshop:1 restricting:1 bayesoptimal:2 vapnik:1 ci:11 keshet:1 phd:1 subtree:10 labelling:1 conditioned:1 margin:3 chen:1 suited:1 failed:2 conconi:3 sport:1 chang:1 springer:3 corresponds:2 minimizer:1 acm:2 conditional:1 sorted:2 sized:1 labelled:1 content:1 change:1 included:2 except:1 operates:1 lemma:3 called:3 experimental:2 internal:1 support:4 arises:1 multilabels:5 ohsu:2 tested:5 ex:7 |
Support Vector Classification with Input Data Uncertainty
Jinbo Bi
Computer-Aided Diagnosis & Therapy Group
Siemens Medical Solutions, Inc.
Malvern, PA 19355
[email protected]
Tong Zhang
IBM T. J. Watson Research Center
Yorktown Heights, NY 10598
[email protected]
Abstract
This paper investigates a new learning model in which the input data
is corrupted with noise. We present a general statistical framework to
tackle this problem. Based on the statistical reasoning, we propose a
novel formulation of support vector classification, which allows uncertainty in input data. We derive an intuitive geometric interpretation of
the proposed formulation, and develop algorithms to efficiently solve it.
Empirical results are included to show that the newly formed method is
superior to the standard SVM for problems with noisy input.
1
Introduction
In the traditional formulation of supervised learning, we seek a predictor that maps input
x to output y. The predictor is constructed from a set of training examples {(xi , yi )}. A
hidden underlying assumption is that errors are confined to the output y. That is, the input
data are not corrupted with noise; or even when noise is present in the data, its effect is
ignored in the learning formulation.
However, for many applications, this assumption is unrealistic. Sampling errors, modeling
errors and instrument errors may preclude the possibility of knowing the input data exactly.
For example, in the problem of classifying sentences from speech recognition outputs for
call-routing applications, the speech recognition system may make errors so that the observed text is corrupted with noise. In image classification applications, some features
may rely on image processing outputs that introduce errors. Hence classification problems
based on the observed text or image features have noisy inputs. Moreover, many systems
can provide estimates for the reliability of their outputs, which measure how uncertain each
element of the outputs is. This confidence information, typically ignored in the traditional
learning formulations, can be useful and should be considered in the learning formulation.
A plausible approach for dealing with noisy input is to use the standard learning formulation without modeling the underlying input uncertainty. If we assume that the same noise
is observed both in the training data and in the test data, then the noise will cause similar
effects in the training and testing phases. Based on this (non-rigorous) reasoning, one can
argue that the issue of input noise may be ignored. However, we show in this paper that by
modeling input uncertainty, we can obtain more accurate predictors.
2
Statistical models for prediction problems with uncertain input
Consider (x_i, y_i), where x_i is corrupted with noise. Let x_i^0 be the original uncorrupted
input. We consider the following data-generating process: first (x_i^0, y_i) is generated according to a distribution p(x_i^0, y_i | θ), where θ is an unknown parameter that should be estimated from the data; next, given (x_i^0, y_i), we assume that x_i is generated from x_i^0 (but
independent of y_i) according to a distribution p(x_i | θ′, σ_i, x_i^0), where θ′ is another possibly
unknown parameter, and σ_i is a known parameter which is our estimate of the uncertainty
(e.g. variance) for x_i. The joint probability of (x_i^0, x_i, y_i) can be written as:

p(x_i^0, x_i, y_i) = p(x_i^0, y_i | θ) p(x_i | θ′, σ_i, x_i^0).

The joint probability of (x_i, y_i) is obtained by integrating out the unobserved quantity x_i^0:

p(x_i, y_i) = ∫ p(x_i^0, y_i | θ) p(x_i | θ′, σ_i, x_i^0) dx_i^0.
This model can be considered as a mixture model where each mixture component corresponds to a possible true input x_i^0 not observed. In this framework, the unknown parameter
(θ, θ′) can be estimated from the data using the maximum-likelihood estimate as:

max_{θ,θ′} Σ_i ln p(x_i, y_i | θ, θ′) = max_{θ,θ′} Σ_i ln ∫ p(x_i^0, y_i | θ) p(x_i | θ′, σ_i, x_i^0) dx_i^0.    (1)
Although this is a principled approach under our data-generating process, due to the integration over the unknown true input x_i^0, it often leads to a very complicated formulation
which is difficult to solve. Moreover, it is not straightforward to extend the method to non-probability formulations such as support vector machines. Therefore we shall consider an
alternative that is computationally more tractable and easier to generalize. The method we
employ in this paper can be regarded as an approximation to (1), often used in engineering
applications as a heuristic for mixture estimation. In this method, we simply regard each
x_i^0 as a parameter of the probability model, so the maximum likelihood becomes:

max_{θ,θ′} Σ_i ln sup_{x_i^0} [ p(x_i^0, y_i | θ) p(x_i | θ′, σ_i, x_i^0) ].    (2)
If our probability model is correctly specified, then (1) is the preferred formulation. However, in practice we may not know the exact p(x_i | θ′, σ_i, x_i^0) (for example, we may not
be able to estimate the level of uncertainty σ_i accurately). Therefore in practice, under
mis-specified probability models, (1) is not necessarily always the better method.

Intuitively, (1) and (2) have similar effects since large values of p(x_i^0, y_i | θ) p(x_i | θ′, σ_i, x_i^0)
dominate the summation in ∫ p(x_i^0, y_i | θ) p(x_i | θ′, σ_i, x_i^0) dx_i^0. That is, both methods prefer
a parameter configuration such that the product p(x_i^0, y_i | θ) p(x_i | θ′, σ_i, x_i^0) is large for some
x_i^0. If an observation x_i is contaminated with large noise so that p(x_i | θ′, σ_i, x_i^0) has a flat
shape, then we can pick an x_i^0 that is very different from x_i which predicts y_i well. On the
other hand, if an observation x_i is contaminated with very small noise, then (1) and (2)
penalize a parameter θ such that p(x_i, y_i | θ) is small. This has the effect of ignoring data
that are very uncertain and relying on data that are less contaminated.
In the literature, there are two types of statistical models: generative models and discriminative models (conditional models). We focus on discriminative modeling in this paper
since it usually leads to better prediction performance. In discriminative modeling, we
assume that p(x_i^0, y_i | θ) has the form p(x_i^0, y_i | θ) = p(x_i^0) p(y_i | θ, x_i^0).
As an example, we consider regression problems with Gaussian noise:

p(x_i^0, y_i | θ) ∝ p(x_i^0) exp( −(θ^T x_i^0 − y_i)^2 / (2σ^2) ),
p(x_i | θ′, σ_i, x_i^0) ∝ exp( −‖x_i − x_i^0‖^2 / (2σ_i^2) ).
The method in (2) becomes

θ̂ = argmin_θ Σ_i inf_{x_i^0} [ (θ^T x_i^0 − y_i)^2 / (2σ^2) + ‖x_i − x_i^0‖^2 / (2σ_i^2) ].    (3)
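For fixed θ, the infimum over x_i^0 in (3) is a quadratic problem whose minimizer moves
x_i along θ, with closed form x_i^0 = x_i − σ_i^2 (θ^T x_i − y_i) θ / (σ^2 + σ_i^2 ‖θ‖^2). This suggests the
alternating sketch below; the derivation and code are ours (σ_i may be a scalar or a
per-example array), and this is not the SVD route used by classical TLS, mentioned next.

```python
import numpy as np

def tls_style_fit(X, y, sigma, sigma_i, iters=50):
    """Alternating minimization of (3).
    theta-step: least squares on the corrected inputs X0 (the penalty on x_i^0
    does not depend on theta); x0-step: closed-form inner infimum."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    X0 = X
    for _ in range(iters):
        # x0_i = x_i - sigma_i^2 (theta^T x_i - y_i) theta / (sigma^2 + sigma_i^2 ||theta||^2)
        r = X @ theta - y
        coef = sigma_i**2 * r / (sigma**2 + sigma_i**2 * (theta @ theta))
        X0 = X - np.outer(coef, theta)
        theta = np.linalg.lstsq(X0, y, rcond=None)[0]
    return theta, X0
```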
This formulation is closely related (but not identical) to the so-called total least squares
(TLS) method [6, 5]. The motivation for total least squares is the same as what we consider in this paper: input data are contaminated with noise. Unlike the statistical modeling
approach we adopted in this section, the total least squares algorithm is derived from a
numerical computation point of view. The resulting formulation is similar to (3), but its
solution can be conveniently described by a matrix SVD decomposition. The method has
been widely applied in engineering applications, and is known to give better performance
than the standard least squares method for problems with uncertain inputs. In our framework, we can regard (3) as the underlying statistical model for total least squares.
For binary classification where y_i ∈ {±1}, we consider a logistic conditional probability
model for y_i, while still assuming Gaussian noise in the input:

p(x_i^0, y_i | θ) ∝ p(x_i^0) · 1/(1 + exp(−θ^T x_i^0 y_i)),
p(x_i | θ′, σ_i, x_i^0) ∝ exp( −‖x_i − x_i^0‖^2 / (2σ_i^2) ).

Similar to the total least squares method (3), we obtain the following formulation from (2):

θ̂ = argmin_θ Σ_i inf_{x_i^0} [ ln(1 + e^{−θ^T x_i^0 y_i}) + ‖x_i − x_i^0‖^2 / (2σ_i^2) ].    (4)
A well-known disadvantage of the logistic model for binary classification is that it does not
model deterministic conditional probability (that is, p(y = 1|x) = 0, 1) very well. This
problem can be remedied using the support vector machine formulation, which has attractive intuitive geometric interpretations for linearly separable problems. Although in this
section a statistical modeling approach is used to gain useful insights, we will focus on
support vector machines in the rest of the paper.
3
Total support vector classification
Our formulation of support vector classification with uncertain input data is motivated by
the total least squares regression method that can be derived from the statistical model (3).
We thus call the proposed algorithm the total support vector classification (TSVC) algorithm. We assume that inputs are subject to an additive noise, i.e., x_i^0 = x_i + Δx_i, where
the noise Δx_i follows a certain distribution. Bounded and ellipsoidal uncertainties are often
discussed in the TLS context [7], and the resulting approaches find many real-life applications.
Hence, instead of assuming Gaussian noise as in (3) and (4), we consider a simple bounded
uncertainty model ‖Δx_i‖ ≤ δ_i with uniform priors. The bound δ_i has a similar effect to
the standard deviation σ_i in the Gaussian noise model. However, under the bounded uncertainty model, the squared penalty term ‖x_i − x_i^0‖^2/(2σ_i^2) is replaced by the constraint
‖Δx_i‖ ≤ δ_i. Another reason for us to use the bounded uncertainty noise model is that the
resulting formulation has a more intuitive geometric interpretation (see Section 4).

SVMs construct classifiers based on separating hyperplanes {x : w^T x + b = 0}. Hence the
parameter θ in (3) and (4) is replaced by a weight vector w and a bias b. In the separable
case, TSVC solves the following problem:

min_{w, b, Δx_i, i=1,...,ℓ}   (1/2)‖w‖^2
subject to   y_i (w^T (x_i + Δx_i) + b) ≥ 1,   ‖Δx_i‖ ≤ δ_i,   i = 1, ..., ℓ.    (5)
For non-separable problems, we follow the standard practice of introducing slack variables
ξ_i, one for each data point. In the resulting formulation, we simply replace the square loss in
(3) or the logistic loss in (4) by the margin-based hinge loss ξ = max{0, 1 − y(w^T x + b)},
which is used in the standard SVC.

min_{w, b, ξ, Δx_i, i=1,...,ℓ}   C Σ_{i=1}^ℓ ξ_i + (1/2)‖w‖^2
subject to   y_i (w^T (x_i + Δx_i) + b) ≥ 1 − ξ_i,   ξ_i ≥ 0,   i = 1, ..., ℓ,
             ‖Δx_i‖ ≤ δ_i,   i = 1, ..., ℓ.    (6)

Note that we introduced the standard Tikhonov regularization term (1/2)‖w‖_2^2 as usually employed in SVMs. The effect is similar to a Gaussian prior in (3) and (4) with the Bayesian
MAP (maximum a posteriori) estimator. One can regard (6) as a regularized instance of (2)
with a non-probabilistic SVM discriminative loss criterion.
Problems with corrupted inputs are more difficult than problems with no input uncertainty.
Even if there is a large margin separator for the original uncorrupted inputs, the observed
noisy data may become non-separable. By modifying the noisy input data as in (6), we
reconstruct an easier problem, for which we may find a good linear separator. Moreover,
by modeling noise in the input data, TSVC becomes less sensitive to data points that are
very uncertain, since we can find a choice of Δx_i such that x_i + Δx_i is far from the decision
boundary and will not be a support vector. This is illustrated later in Figure 1 (right). TSVC
thus constructs classifiers by focusing on the more trustworthy data that are less uncertain.
4
Geometric interpretation
Further investigation reveals an intuitive geometric interpretation for TSVC which allows
users to easily grasp the fundamentals of this new formulation. We first derive the fact
that when the optimal ŵ is obtained, the optimal Δx̂_i can be represented in terms of ŵ.
If w is fixed in problem (6), optimizing problem (6) is equivalent to minimizing Σ ξ_i
over Δx_i. The following lemma characterizes the solution.

Lemma 1. For any given hyperplane (w, b), the solution Δx̂_i of problem (6) is
Δx̂_i = y_i δ_i w/‖w‖, i = 1, ..., ℓ.

Proof. Since the noise vector Δx_i only affects ξ_i and does not have impact on the other
slack variables ξ_j, j ≠ i, the minimization of Σ ξ_i can be decoupled into ℓ subproblems
of minimizing each ξ_i = max{0, 1 − y_i(w^T(x_i + Δx_i) + b)} = max{0, 1 − y_i(w^T x_i +
b) − y_i w^T Δx_i} over its corresponding Δx_i. By the Cauchy-Schwarz inequality, we have
|y_i w^T Δx_i| ≤ ‖w‖ · ‖Δx_i‖ with equality if and only if Δx_i = cw for some scalar c. Since
Δx_i is bounded by δ_i, the optimal Δx̂_i = y_i δ_i w/‖w‖ and the minimal ξ̂_i = max{0, 1 −
y_i(w^T x_i + b) − δ_i‖w‖}.

Define S_w(X) = {x_i + y_i δ_i w/‖w‖, i = 1, ..., ℓ}. Then S_w(X) is a set of points that are
obtained by shifting the original points labeled +1 along w and points labeled −1 along
−w, respectively, to their individual uncertainty boundaries. These shifted points are illustrated
in Figure 1 (middle) as filled points.
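In code, the lemma's closed form and the shifted set S_w(X) are one line each (our sketch;
labels y_i are in {-1, +1}):

```python
import numpy as np

def optimal_dx(w, y, delta):
    """Lemma 1: dx_i = y_i * delta_i * w / ||w||."""
    return (y * delta)[:, None] * (w / np.linalg.norm(w))

def shifted_set(X, w, y, delta):
    """S_w(X): each point pushed to its uncertainty boundary along +/- w."""
    return X + optimal_dx(w, y, delta)
```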
Theorem 1. The optimal hyperplane (ŵ, b̂) obtained by TSVC (5) separates S_ŵ(X) with
the maximal margin. The optimal hyperplane (ŵ, b̂) obtained by TSVC (6) separates
S_ŵ(X) with the maximal soft margin.

Figure 1: The separating hyperplanes obtained (left) by standard SVC and (middle) by total
SVC (6). The margin can be magnified by taking uncertainties into account. Right: the
TSVC solution is less sensitive to outliers with large noise.
Proof. 1. If there exists any w such that S_w(X) is linearly separable, we can solve
problem (5) to obtain the largest separation margin. Let (ŵ, b̂, Δx̂_i) be optimal to problem
(5). Note that solving problem (5) is equivalent to max γ subject to the constraints
y_i(w^T(x_i + Δx_i) + b) ≥ γ and ‖w‖ = 1, so the optimal γ = 1/‖ŵ‖ [8]. To have the greatest
γ, we want to maximize y_i(ŵ^T(x_i + Δx_i) + b̂) for all i over Δx_i. Hence, following
arguments similar to those in Lemma 1, we have |y_i ŵ^T Δx_i| ≤ ‖ŵ‖ ‖Δx_i‖ = δ_i‖ŵ‖, and
when Δx̂_i = y_i δ_i ŵ/‖ŵ‖, equality holds.

2. If no w exists to separate S_w(X), or even when such a w exists, we may solve problem
(6) to achieve the best compromise between the training error and the margin size. Let ŵ
be optimal to problem (6). By Lemma 1, the optimal Δx̂_i = y_i δ_i ŵ/‖ŵ‖.

According to the above analysis, we can convert problems (5) and (6) to a problem in
variables w, b, ξ, as opposed to optimizing over both (w, b, ξ) and Δx_i, i = 1, ..., ℓ. For
example, the linearly non-separable problem (6) becomes

min_{w,b,ξ}   C Σ_{i=1}^ℓ ξ_i + (1/2)‖w‖^2
subject to   y_i (w^T x_i + b) + δ_i‖w‖ ≥ 1 − ξ_i,   ξ_i ≥ 0,   i = 1, ..., ℓ.    (7)

Solving problem (7) yields an optimal solution to problem (6), and problem (7) can be
interpreted as finding (w, b) to separate S_w(X) with the maximal soft margin. A similar
argument holds for the linearly separable case.
5
Solving and kernelizing TSVC
TSVC problem (6) can be recast to a second-order cone program (SOCP) as usually done
in TLS or Robust LS methods [7, 4]. However, directly implementing this SOCP will be
computationally quite expensive. Moreover, the SOCP formulation involves a large amount
of redundant variables, so a typical SOCP solver will take much longer time to achieve an
optimal solution. We propose a simple iterative approach as follows based on alternating
optimization method [1].
Algorithm 1
Initialize ?xi = 0, repeat the following two steps until a termination criterion is met:
1. Fix ?xi , i = 1, ? ? ? , ` to the current value, solve problem (6) for w, b, and ?.
2. Fix w, b to the current value, solve problem (6) for ?xi , i = 1, ? ? ? , `, and ?.
The first step of Algorithm 1 solves no more than a standard SVM by treating x_i + ∆x_i as
the training examples. Similar to how SVMs are usually optimized, we can solve the dual
SVM formulation [8] for ŵ, b̂. The second step of Algorithm 1 solves a problem which has
been discussed in Lemma 1. No optimization solver is needed: the solution ∆x_i of the
second step has a closed form in terms of the fixed w.
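For the linear kernel, one plausible rendering of Algorithm 1 is sketched below; we assume scikit-learn's SVC as the inner solver and a fixed iteration count as the termination criterion, both of which are our choices rather than the paper's.

```python
import numpy as np
from sklearn.svm import SVC

def tsvc_linear(X, y, delta, C=1.0, n_iter=10):
    """Alternating optimization (Algorithm 1) for the linear kernel.
    X: (n, d) inputs, y: (n,) labels in {-1, +1}, delta: (n,) uncertainty radii."""
    dX = np.zeros_like(X, dtype=float)
    for _ in range(n_iter):
        # Step 1: fix dx_i and train a standard SVM on the shifted examples.
        svm = SVC(kernel="linear", C=C).fit(X + dX, y)
        w = svm.coef_.ravel()
        b = float(svm.intercept_[0])
        # Step 2: fix (w, b); by Lemma 1 the optimal dx_i = y_i delta_i w / ||w||.
        dX = (y * delta)[:, None] * w[None, :] / np.linalg.norm(w)
    return w, b
```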
5.1 TSVC with linear functions
When only linear functions are considered, an alternative to Algorithm 1 exists for solving
problem (6). As analyzed in [5, 3], the Tikhonov regularization min C Σ_i ξ_i + (1/2)‖w‖²
has an important equivalent formulation as min Σ_i ξ_i subject to ‖w‖ ≤ β, where β is a
positive constant. It can be shown that if β ≤ ‖w*‖, where w* is the solution to problem
(6) with the (1/2)‖w‖² term removed, then the solution of the constrained problem is identical to the
solution of the Tikhonov regularization problem for an appropriately chosen C. Furthermore,
at optimality the constraint ‖w‖ ≤ β is active, which means ‖ŵ‖ = β. Hence TSVC problem (7) can be converted to a simple SOCP with the constraint ‖w‖ ≤ β, or to a quadratically
constrained quadratic program (QCQP) as follows if we equivalently use ‖w‖² ≤ β².
    min_{w,b,ξ}  Σ_{i=1}^ℓ ξ_i                                                  (8)
    subject to   y_i(w^T x_i + b) + β δ_i ≥ 1 − ξ_i,  ξ_i ≥ 0,  i = 1, ..., ℓ,  ‖w‖² ≤ β².
This QCQP produces exactly the same solution as problem (6) but is much easier to implement, since it contains far fewer variables. By a duality analysis similar to that adopted
in [3], problem (8) has the following dual formulation in the dual variables α:
    min_α  β √( Σ_{i,j=1}^ℓ α_i α_j y_i y_j x_i^T x_j ) − Σ_{i=1}^ℓ (1 − β δ_i) α_i    (9)
    subject to  Σ_{i=1}^ℓ α_i y_i = 0,   0 ≤ α_i ≤ 1,  i = 1, ..., ℓ.
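For reference, the primal QCQP (8) is straightforward to express in a modeling language such as CVXPY; the sketch below is our own rendering of the stated constraints, not code from the paper.

```python
import cvxpy as cp

def tsvc_qcqp(X, y, delta, beta):
    """Problem (8): minimize sum(xi) subject to
    y_i (w.x_i + b) + beta*delta_i >= 1 - xi_i, xi >= 0, ||w||^2 <= beta^2."""
    n, d = X.shape
    w, b, xi = cp.Variable(d), cp.Variable(), cp.Variable(n)
    constraints = [cp.multiply(y, X @ w + b) + beta * delta >= 1 - xi,
                   xi >= 0,
                   cp.sum_squares(w) <= beta ** 2]
    cp.Problem(cp.Minimize(cp.sum(xi)), constraints).solve()
    return w.value, b.value
```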
5.2 TSVC with kernels
By using a kernel function k, the input vector x_i is mapped to φ(x_i) in a usually high-dimensional feature space. The uncertainty in the input data introduces uncertainties for the
images φ(x_i) in the feature space. TSVC can be generalized to construct separating hyperplanes in the feature space using the images of the input vectors and the mapped uncertainties.
One possible generalization of TSVC is to assume the images are still subject to an additive
noise and that the uncertainty model in the feature space can be represented as ‖∆φ(x_i)‖ ≤ δ_i.
Then, following an analysis similar to that in Sections 4 and 5.1, we obtain a problem the same
as (8), only with x_i replaced by φ(x_i) and ∆x_i replaced by ∆φ(x_i), which can be easily
kernelized by solving its dual formulation (9) with the inner products x_i^T x_j replaced by k(x_i, x_j).
It is more realistic, however, that we are only able to estimate uncertainties in the input
space as bounded spheres ‖∆x_i‖ ≤ δ_i. When such an uncertainty sphere is mapped to the
feature space, the mapped uncertainty region may correspond to an irregular shape in the
feature space, which makes the optimization of TSVC difficult. We thus propose an
approximation strategy for Algorithm 1 based on the first-order Taylor expansion of k.
A kernel function k(x, z) takes two arguments x and z. When we fix one of the arguments,
for example z, k can be viewed as a function of the other argument x. The first-order Taylor
expansion of k with respect to x is k(x_i + ∆x, ·) ≈ k(x_i, ·) + ∆x^T k'(x_i, ·), where k'(x_i, ·)
is the gradient of k with respect to x at the point x_i.
Solving the dual SVM formulation in step 1 of Algorithm 1 with ∆x_j fixed to ∆x̂_j yields
a solution (ŵ = Σ_j y_j α̂_j φ(x_j + ∆x̂_j), b̂) and thus a predictor
f(x) = Σ_j y_j α̂_j k(x, x_j + ∆x̂_j) + b̂. In step 2, we set (w, b) to (ŵ, b̂) and minimize
Σ_i ξ_i over ∆x_i, which, as we discussed in Lemma 1, amounts to minimizing each
ξ_i = max{0, 1 − y_i(Σ_j y_j α̂_j k(x_i + ∆x_i, x_j + ∆x̂_j) + b)} over ∆x_i. Applying the
Taylor expansion yields

    y_i ( Σ_j y_j α̂_j k(x_i + ∆x_i, x_j + ∆x̂_j) + b )
        = y_i ( Σ_j y_j α̂_j k(x_i, x_j + ∆x̂_j) + b ) + y_i ∆x_i^T Σ_j y_j α̂_j k'(x_i, x_j + ∆x̂_j).
Table 1: Average test error percentages of TSVC and standard SVC algorithms on synthetic
problems (left and middle) and digits classification problems (right).

           Synthetic linear target       Synthetic quadratic target      Digits
    ℓ      20   30   50   100  150       20   30   50   100  150        100    500
    SVC    8.9  7.8  5.5  2.9  2.1       9.9  7.5  6.7  3.2  2.8        24.35  18.91
    TSVC   6.1  5.2  3.8  2.1  1.6       7.9  6.1  4.4  2.8  2.4        23.00  16.10
The optimal ∆x_i = y_i δ_i v_i/‖v_i‖, where v_i = Σ_j y_j α̂_j k'(x_i, x_j + ∆x̂_j), by the Cauchy–Schwarz
inequality. A closed-form approximate solution for the second step is thus acquired.
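The closed-form step-2 update is easy to implement once k' is available; the sketch below assumes an RBF kernel k(x, z) = exp(−γ‖x − z‖²) purely for illustration (the paper's experiments use a quadratic kernel, so the kernel choice and all names here are our assumptions).

```python
import numpy as np

def rbf(A, B, gamma):
    # k(a, b) = exp(-gamma * ||a - b||^2) for all pairs of rows
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def step2_displacements(X, y, delta, Xs, ys, alpha, gamma):
    """Approximate step 2: v_i = sum_j y_j alpha_j k'(x_i, x_j + dx_j) and
    dx_i = y_i delta_i v_i / ||v_i||.  Xs holds the shifted points x_j + dx_j."""
    K = rbf(X, Xs, gamma)
    # Gradient wrt the first argument: k'(x, z) = -2 gamma (x - z) k(x, z).
    diff = X[:, None, :] - Xs[None, :, :]
    V = (-(2 * gamma) * (ys * alpha)[None, :, None] * K[..., None] * diff).sum(axis=1)
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    return (y * delta)[:, None] * V / np.maximum(norms, 1e-12)
```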
6 Experiments
Two sets of simulations were performed, one on synthetic datasets and one on NIST handwritten digits, to validate the proposed TSVC algorithm. We used the commercial optimization package ILOG CPLEX 9.0 to solve problems (8), (9) and the standard SVC dual
problem that is part of Algorithm 1.
In the experiments with synthetic data in 2 dimensions, we generated ℓ (= 20, 30, 50, 100,
150) training examples x_i from the uniform distribution on [−5, 5]². Two binary classification problems were created, with target separating functions x_1 − x_2 = 0 and x_1² + x_2² = 9,
respectively. We used TSVC with linear functions for the first problem and TSVC with the
quadratic kernel (x_i^T x_j)² for the second problem. The input vectors x_i were contaminated
by Gaussian noise with mean [0, 0] and covariance matrix Σ = σ_i I, where σ_i was randomly
chosen from [0.1, 0.8] and I denotes the 2 × 2 identity matrix. To produce an
outlier effect, we randomly chose 0.1ℓ examples from the first 0.2ℓ examples after the
examples were ordered in ascending order of their distances to the target boundary. For these
0.1ℓ examples, noise was generated using a larger σ randomly drawn from [0.5, 2]. Models
obtained by the standard SVC and TSVC were tested on a test set of 10000 examples that
were generated from the same distribution and target functions but without contamination.
We performed 50 trials for each experimental setting. The misclassification error rates averaged over the 50 trials are reported in Table 1. TSVC performed better overall than SVC.
Two representative modeling results for ℓ = 50 are also depicted in Figure 2.
Figure 2: Results obtained by TSVC (solid lines) and standard SVC (dashed lines) for the
problem with (left) a linear target function and the problem with (right) a quadratic target
function. The true target functions are illustrated using dash-dot lines.
The NIST database of handwritten digits does not contain any uncertainty information
originally. We created uncertainties by image distortions. Different types of distortions
can be present in real-life data; we simulated them only by rotating images. We used ℓ (= 100,
500) digits from the beginning of the database for training and 2000 digits from the end
of the database for testing. We discriminated between odd numbers and even numbers. The
angle of rotation for each digit was randomly chosen from [−8°, 8°]. The uncertainty
upper bounds δ_i can be regarded as tuning parameters; we simply set all δ_i = δ. The data
was preprocessed in the following way: training examples were centered to have mean 0
and scaled to have standard deviation 1. The test data was preprocessed using the mean
and standard deviation of the training examples. We performed 50 trials with TSVC and SVC
using the linear kernel, which means we need to solve problem (9). Results are reported
in Table 1; the tuned parameter δ was 1.38 for ℓ = 100 and 1.43 for ℓ = 500. We
conjecture that TSVC performance could be further improved with an estimate of each δ_i.
7 Discussions
We investigated a new learning model in which the observed input is corrupted with noise.
Based on a probability modeling approach, we derived a general statistical formulation
where unobserved input is modeled as a hidden mixture component. Under this framework,
we were able to develop estimation methods that take input uncertainty into consideration.
Motivated by this probability modeling approach, we proposed a new SVM classification
formulation that handles input uncertainty. This formulation has an intuitive geometric
interpretation. Moreover, we presented simple numerical algorithms which can be used to
solve the resulting formulation efficiently. Two empirical examples, one artificial and one
with real data, were used to illustrate that the new method is superior to the standard SVM
for problems with noisy input data. A related approach, with a different focus, is presented
in [2]. Our work attempts to recover the original classifier from the corrupted training data,
and hence we evaluated the performance on clean test data. In our statistical modeling
framework, rigorously speaking, the input uncertainty of test-data should be handled by a
mixture model (or a voted classifier under the noisy input distribution). The formulation
in [2] was designed to separate the training data under the worst input noise configuration
instead of the most likely configuration in our case. The purpose is to directly handle test
input uncertainty with a single linear classifier under the worst possible error setting. The
relationship and advantages of these different approaches require further investigation.
References
[1] J. Bezdek and R. Hathaway. Convergence of alternating optimization. Neural, Parallel Sci.
Comput., 11:351–368, 2003.
[2] C. Bhattacharyya, K. S. Pannagadatta, and A. J. Smola. A second order cone programming formulation for classifying missing data. In NIPS, Vol. 17, 2005.
[3] J. Bi and V. N. Vapnik. Learning with rigorous support vector machines. In M. Warmuth and
B. Schölkopf, editors, Proceedings of the 16th Annual Conference on Learning Theory, pages
35–42, Menlo Park, CA, 2003. AAAI Press.
[4] L. El Ghaoui and H. Lebret. Robust solutions to least-squares problems with uncertain data.
SIAM Journal on Matrix Analysis and Applications, 18:1035–1064, 1997.
[5] G. H. Golub, P. C. Hansen, and D. P. O'Leary. Tikhonov regularization and total least squares.
SIAM Journal on Numerical Analysis, 30:185–194, 1999.
[6] G. H. Golub and C. F. Van Loan. An analysis of the total least squares problem. SIAM Journal
on Numerical Analysis, 17:883–893, 1980.
[7] S. Van Huffel and J. Vandewalle. The Total Least Squares Problem: Computational Aspects and
Analysis. Frontiers in Applied Mathematics 9. SIAM Press, Philadelphia, PA, 1991.
[8] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, Inc., New York, 1998.
Synchronization of neural networks by mutual
learning and its application to cryptography
Einat Klein
Department of Physics
Bar-Ilan University
Ramat-Gan, 52900 Israel
Rachel Mislovaty
Department of Physics
Bar-Ilan University
Ramat-Gan, 52900 Israel
Andreas Ruttor
Institut für Theoretische Physik
Universität Würzburg
Am Hubland, 97074 Würzburg, Germany
Ido Kanter
Department of Physics
Bar-Ilan University
Ramat-Gan, 52900 Israel
Wolfgang Kinzel
Institut für Theoretische Physik
Universität Würzburg
Am Hubland, 97074 Würzburg, Germany
Abstract
Two neural networks that are trained on their mutual output synchronize
to an identical time-dependent weight vector. This novel phenomenon
can be used for creation of a secure cryptographic secret-key using a
public channel. Several models for this cryptographic system have been
suggested, and have been tested for their security under different sophisticated attack strategies. The most promising models are networks that
involve chaos synchronization. The synchronization process of mutual
learning is described analytically using statistical physics methods.
1 Introduction
Neural networks learn from examples. This concept has extensively been investigated using
models and methods of statistical mechanics [1, 2]. A "teacher" network presents
input/output pairs of high-dimensional data, and a "student" network is trained on
these data. Training means that synaptic weights adapt by simple rules to the i/o pairs.
When the networks (teacher as well as student) have N weights, the training process
needs of the order of N examples to obtain generalization abilities. This means that after
the training phase the student has achieved some overlap with the teacher; their weight vectors
are correlated. As a consequence, the student can classify an input pattern which does not
belong to the training set. The average classification error decreases with the number of
training examples.
Training can be performed in two different modes: Batch and on-line training. In the first
case all examples are stored and used to minimize the total training error. In the second
case only one new example is used per time step and then destroyed. Therefore on-line
training may be considered as a dynamic process: at each time step the teacher creates
a new example which the student uses to change its weights by a tiny amount. In fact,
for random input vectors and in the limit N → ∞, learning and generalization can be
described by ordinary differential equations for a few order parameters [3].
Figure 1: Two perceptrons receive an identical input x and learn their mutual output bits σ.
On-line training is a dynamic process where the examples are generated by a static network
- the teacher. The student tries to move towards the teacher. However, the student network
itself can generate examples on which it is trained. What happens if two neural networks
learn from each other? In the following section an analytic solution is presented [6], which
shows a novel phenomenon: synchronization by mutual learning. The biological consequences of this phenomenon have not yet been explored, but we found an interesting application
in cryptography: secure generation of a secret key over a public channel.
In the field of cryptography, one is interested in methods to transmit secret messages between two partners A and B. An attacker E who is able to listen to the communication
should not be able to recover the secret message.
In 1976, Diffie and Hellman found a method based on number theory for creating a secret
key over a public channel accessible to any attacker [7]. Here we show how neural networks
can produce a common secret key by exchanging bits over a public channel and by learning
from each other.
2 Mutual Learning
We start by presenting the process of mutual learning for a simple network: Two perceptrons receive a common random input vector x and change their weights w according to
their mutual bit σ, as sketched in Fig. 1. The output bit σ of a single perceptron is given by
the equation

    σ = sign(w · x).                                                            (1)

Here x is an N-dimensional input vector with components drawn from a Gaussian with
mean 0 and variance 1, and w is an N-dimensional weight vector with continuous components
which are normalized,

    w · w = 1.                                                                  (2)
The initial state is a random choice of the components w_i^{A/B}, i = 1, ..., N, for the two weight
vectors w^A and w^B. At each training step a common random input vector is presented to
the two networks, which generate two output bits σ^A and σ^B according to (1). Now the
weight vectors are updated by the perceptron learning rule [3]:

    w^A(t + 1) = w^A(t) + (η/N) x σ^B Θ(−σ^A σ^B)
    w^B(t + 1) = w^B(t) + (η/N) x σ^A Θ(−σ^A σ^B)                               (3)

Θ(x) is the step function. Hence, a training step is performed only if the two perceptrons
disagree, with a learning rate η. After each step (3), the two weight vectors have to be
normalized. In the limit N → ∞, the overlap

    R(t) = w^A(t) · w^B(t)                                                      (4)
has been calculated analytically [6].

Figure 2: Final overlap R between two perceptrons as a function of the learning rate η. Above
a critical rate η_c the time-dependent networks are synchronized. From Ref. [6].

The number of training steps t is scaled as α = t/N,
and R(α) follows the equation
    dR/dα = (R + 1) ( η √(2/π) (1 − R) − η² θ/π )                               (5)
where θ is the angle between the two weight vectors w^A and w^B, i.e. R = cos θ. This
equation has fixed points R = 1, R = −1, and

    η = √(2π) (1 − cos θ) / θ.                                                  (6)
Fig. 2 shows the attractive fixed point of (5) as a function of the learning rate η. For small
values of η the two networks relax to a state of mutual agreement, R → 1 for η → 0.
With increasing learning rate η the angle between the two weight vectors increases up to
θ = 133° for

    η → η_c ≈ 1.816.                                                            (7)

Above the critical rate η_c the networks relax to a state of complete disagreement, θ = 180°,
R = −1. The two weight vectors are antiparallel to each other, w^A = −w^B.
As a consequence, the analytic solution shows, well supported by numerical simulations
for N = 100, that two neural networks can synchronize to each other by mutual learning.
Both networks are trained on the examples generated by their partner and finally obtain an
antiparallel alignment. Even after synchronization the networks keep moving; the motion
is a kind of random walk on an N-dimensional hypersphere, producing a rather complex
sequence of output bits σ^A = −σ^B [8].
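A minimal simulation of rule (3) (our own sketch; the parameter values are arbitrary) shows the overlap R(t) relaxing toward one of the fixed points of (5), depending on η.

```python
import numpy as np

def mutual_perceptrons(N=100, eta=1.0, steps=20000, seed=0):
    """Simulate rule (3): learn only on disagreement, then renormalize.
    Returns the trajectory of the overlap R(t) = wA . wB."""
    rng = np.random.default_rng(seed)
    wA = rng.standard_normal(N); wA /= np.linalg.norm(wA)
    wB = rng.standard_normal(N); wB /= np.linalg.norm(wB)
    R = np.empty(steps)
    for t in range(steps):
        x = rng.standard_normal(N)
        sA = 1.0 if wA @ x > 0 else -1.0
        sB = 1.0 if wB @ x > 0 else -1.0
        if sA != sB:                       # Theta(-sA*sB): update on disagreement
            wA += (eta / N) * x * sB
            wB += (eta / N) * x * sA
            wA /= np.linalg.norm(wA)
            wB /= np.linalg.norm(wB)
        R[t] = wA @ wB
    return R                               # tends toward +1 or -1 depending on eta
```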
3 Random walk in weight space
We want to apply synchronization of neural networks to cryptography. In the previous section we have seen that the weight vectors of two perceptrons learning from each other can
synchronize. The new idea is to use the common weights w^A = −w^B as a key for encryption [11]. But two issues still have to be solved: (i) Can an external observer, recording
the exchange of bits, calculate the final w^A(t)? The essence of using mutual learning as
an encryption tool is the fact that while the parties perform a mutual process in which they
react to one another, the attacker performs a learning process in which the "teacher"
does not react to him. (ii) Does this phenomenon exist for discrete weights? Since
communication is usually based on bit sequences, this is an important practical issue. Both
issues are discussed below.
Synchronization occurs for normalized weights; unnormalized ones do not synchronize [6].
Therefore, for discrete weights, we introduce a restriction on the space of possible vectors
and limit the components w_i^{A/B} to 2L + 1 different values,

    w_i^{A/B} ∈ {−L, −L + 1, ..., L − 1, L}.                                    (8)

In order to obtain synchronization to a parallel (instead of an antiparallel) state w^A = w^B,
we modify the learning rule (3) to:

    w^A(t + 1) = w^A(t) − x σ^A Θ(σ^A σ^B)
    w^B(t + 1) = w^B(t) − x σ^B Θ(σ^A σ^B)                                      (9)
Now the components of the random input vector x are binary, x_i ∈ {+1, −1}. If the two
networks produce an identical output bit σ^A = σ^B, then their weights move one step in
the direction of −x_i σ^A. But the weights should remain in the interval (8); therefore, if any
component moves out of this interval, |w_i| = L + 1, it is set back to the boundary w_i = ±L.
Each component of the weight vectors performs a kind of random walk with a reflecting
boundary. Two corresponding components w_i^A and w_i^B receive the same random number
±1. After each hit at the boundary the distance |w_i^A − w_i^B| is reduced, until it has reached
zero. For two perceptrons with an N-dimensional weight space we have two ensembles of
N random walks on the interval {−L, ..., L}. We expect that after some characteristic time
scale τ = O(L²) the probability of two random walks being in different states decreases as
P(t) ∝ P(0) e^{−t/τ}. Hence the total synchronization time should be given by N · P(t) ≃ 1,
which gives t_sync ∝ τ ln N. In fact, our simulations show the synchronization time
increases logarithmically with N.
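The random-walk picture is easy to simulate; the following sketch (ours, with arbitrary parameter values) counts the number of common ±1 moves, with clipping at ±L, until the two component ensembles coincide, which grows roughly like L² ln N.

```python
import numpy as np

def sync_time(N=1000, L=3, seed=1):
    """Corresponding components of w^A and w^B receive identical +/-1 moves
    and are reflected at |w| = L; count steps until all components coincide."""
    rng = np.random.default_rng(seed)
    wA = rng.integers(-L, L + 1, size=N)
    wB = rng.integers(-L, L + 1, size=N)
    t = 0
    while np.any(wA != wB):
        move = rng.choice([-1, 1], size=N)
        wA = np.clip(wA + move, -L, L)     # "set back to the boundary" as clipping
        wB = np.clip(wB + move, -L, L)
        t += 1
    return t
```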
4 Mutual Learning in the Tree Parity Machine
A single perceptron transmits too much information. An attacker, who knows the set of
input/output pairs, can derive the weights of the two partners. On one hand, the information
should be hidden so that the attacker does not calculate the weights, but on the other hand
enough information should be transmitted so that the two partners can synchronize. We
found that multilayer networks with hidden units may be candidates for such a task [11].
More precisely, we consider a Tree Parity Machine (TPM) with three hidden units, as shown
in Fig. 3.
Figure 3: A tree parity machine with K = 3
Each hidden unit is a perceptron (1) with discrete weights (8). The output bit σ of the total
network is the product of the three bits of the hidden units:

    σ^A = σ_1^A σ_2^A σ_3^A,    σ^B = σ_1^B σ_2^B σ_3^B                         (10)
At each training step the two machines A and B receive identical input vectors x_1, x_2, x_3.
The training algorithm is the following: only if the two output bits are identical, σ^A = σ^B,
can the weights be changed. In this case, only the hidden units σ_i which are identical to σ
change their weights, using the Hebbian rule

    w_i^A(t + 1) = w_i^A(t) − x_i σ^A                                           (11)
Neither the partner nor any attacker knows which of the K weight vectors is
updated. The partners A and B react to their mutual output signals σ^A and σ^B,
whereas an attacker can only receive these signals but cannot influence the partners with its
own output bit. This is the essential mechanism which allows synchronization but prohibits
learning. Nevertheless, advanced attackers use different heuristics to accelerate their
synchronization, as described in the next section.
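One way the mutual-learning step of the TPM might look in code is sketched below (our own rendering of rules (10) and (11); the tie-breaking of sign(0) and the clipping to [−L, L], as in (8), are implementation choices).

```python
import numpy as np

def tpm_step(W_A, W_B, L, rng):
    """One mutual-learning step for two tree parity machines with weight
    matrices of shape (K, N).  Rule (11) is applied only when the outputs
    agree, and only in hidden units whose bit equals the total output."""
    K, N = W_A.shape
    X = rng.choice([-1, 1], size=(K, N))          # common public inputs
    sA = np.sign((W_A * X).sum(1)); sA[sA == 0] = 1
    sB = np.sign((W_B * X).sum(1)); sB[sB == 0] = 1
    tA, tB = sA.prod(), sB.prod()
    if tA == tB:
        for W, s in ((W_A, sA), (W_B, sB)):
            for k in range(K):
                if s[k] == tA:
                    W[k] = np.clip(W[k] - X[k] * tA, -L, L)
    return tA, tB
```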
5 Attackers
The following are possible attack strategies, which were suggested by Shamir et al. [12]:
the Genetic Attack, in which a large population of attackers is trained, and at every new
time step each attacker is multiplied to cover the 2^{K−1} possible internal representations of
{σ_i} for the current output σ; as the dynamics proceed, successful attackers stay while the
unsuccessful are removed. The Probabilistic Attack, in which the attacker tries to follow
the probability of every weight element by calculating the distribution of the local field for
every input and using the output, which is publicly known. The Naive Attacker, which
simply imitates one of the parties.
More successful is the Flipping Attack strategy, in which the attacker imitates one of the
parties but, in steps in which his output disagrees with the imitated party's output, he
negates ("flips") the sign of one of his hidden units. The unit most likely to be wrong
is the one with the minimal absolute value of the local field, so that is the unit which
is flipped.
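A sketch of the flipping heuristic follows (ours); it assumes the attacker is updated only on the public steps where A and B agreed, with tau_A denoting the published output.

```python
import numpy as np

def flipping_attacker_step(W_E, X, tau_A, L):
    """One flipping-attack update: if the attacker's output disagrees with
    tau_A, flip the hidden unit with the smallest |local field| (the one
    most likely to be wrong), then update like party A."""
    h = (W_E * X).sum(1)                   # local fields of the K hidden units
    s = np.sign(h); s[s == 0] = 1
    if s.prod() != tau_A:
        k = int(np.argmin(np.abs(h)))
        s[k] = -s[k]                       # flipping one factor flips the product
    for k in range(len(s)):
        if s[k] == tau_A:
            W_E[k] = np.clip(W_E[k] - X[k] * tau_A, -L, L)
```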
While the synchronization time increases with L² [15], the probability of finding a successful flipping attacker decreases exponentially with L,

    P ∝ e^{−yL},

as seen in Figure 4. Therefore, for large L values the system is secure [15]. At every time step,
the parties either approach each other (an "attractive step") or drift apart (a "repulsive step").
Close to synchronization the probability of a repulsive step in the mutual learning between
A and B scales like ε², while in the dynamic learning between the naive attacker C and
A it scales like ε, where we define ε = Prob(σ_i^C ≠ σ_i^A) [18].
It has been shown that among a group of Ising vector students which perform learning and
have an overlap R with the teacher, the best student is the center-of-mass vector (which was
shown to be an Ising vector as well), which has an overlap R_cm ≈ √R for R ∈ [0, 1] [19].
Therefore, letting a group of attackers cooperate throughout the process may be to their
advantage. The most successful attack strategy, the "Majority Flipping Attacker", uses a
group of attackers as a cooperating group rather than as individuals. When updating the
weights, instead of each attacker being updated according to its own result, all are updated
according to the majority's result. This "team-work" approach improves the attacker's
performance. When using the majority scheme, the probability of a successful attacker
seems to approach a constant value ≈ 0.5, independent of L.
Figure 4: The attacker's success probability P as a function of L (log scale), for the flipping
attack and the majority-flipping attack, with N = 1000, M = 100, averaged over 1000 samples;
the flipping-attack data follow the fit P = 1.55 exp(−0.4335 L). To avoid fluctuations, we
define the attacker as successful if he found out 98% of the weights.
6 Analytical description
The semi-analytical description of this process gives us further insight into the synchronization process of mutual and dynamic learning. The study of discrete networks requires different methods of analysis than those used for the continuous case. We found that instead
of examining the evolution of R and Q, we must examine (2L + 1) × (2L + 1) parameters
which describe the mutual learning process. By writing a Markovian process that describes
the development of these parameters, one gains insight into the learning procedure. Thus
we define a (2L + 1) × (2L + 1) matrix F_τ, in which the state of the machines at time
step τ is represented. The elements of F are f_{qr}, where q, r = −L, ..., −1, 0, 1, ..., L.
The element f_{qr} represents the fraction of components for which unit A's component
equals q and the matching component in unit B equals r. Hence the overlap between the
two units, as well as their norms, are defined through this matrix:
    R = Σ_{q,r=−L}^{L} q r f_{qr},   Q^A = Σ_{q=−L}^{L} q² f_{qr},   Q^B = Σ_{r=−L}^{L} r² f_{qr}   (12)
The updating of the matrix elements is described as follows: for the elements with q and r
which are not on the boundary (q ≠ ±L and r ≠ ±L), the update can be written in a
simple manner,

    f⁺_{q,r} = Θ(p − ε) f_{q,r} + Θ(ε − p) ( (1/2) f_{q+1,r−1} + (1/2) f_{q−1,r+1} ).   (13)
Our results indicate that the order parameters are not self-averaging quantities [16]. Several
runs with the same N result in different curves for the order parameters as a function of
the number of steps; see Figure 5. This explains the non-zero variance of ρ as a result of
the fluctuations in the local fields induced by the input, even in the thermodynamic limit.
7 Combining neural networks and chaos synchronization
Figure 5: The averaged overlap ⟨ρ⟩ and its standard deviation as a function of the number
of steps, as found from the analytical results (solid line) and simulation results (circles)
of mutual learning in TPMs. Inset: analytical results (solid line) and simulation results
(circles) for the perceptron, with L = 1 and N = 10⁴.

Two chaotic systems starting from different initial conditions can be synchronized by different kinds of couplings between them. This chaotic synchronization can be used in neural
cryptography to enhance the cryptographic systems and to improve their security. A model
which combines a TPM with logistic maps is presented here; it was shown to be more
secure than the TPM discussed above. Other models which use mutual synchronization of
networks whose dynamics are those of the Lorenz system are now under research and seem
very promising.
In the following system we combine neural networks with logistic maps: Both partners A
and B use their neural networks as input for the logistic maps which generate the output
bits to be learned. By mutually learning these bits, the two neural networks approach each
other and produce an identical signal to the chaotic maps, which in turn synchronize as
well, therefore accelerating the synchronization of the neural nets.
Previously, the output bit of each hidden unit was the sign of the local field [11]. Now we
combine the TPM with chaotic synchronization by feeding the local fields into logistic maps:

    s_k(t + 1) = α (1 − ε) s_k(t) (1 − s_k(t)) + (ε/2) h̃_k(t)                   (14)
Here h̃_k denotes a transformed local field which is shifted and normalized to fit into the
interval [0, 2]. For ε = 0 one has the usual quadratic iteration, which produces K chaotic
series s_k(t) when the parameter α is chosen correspondingly; here we use α = 3.95. For
0 < ε < 1 the logistic maps are coupled to the fields of the hidden units. It has been
shown that such a coupling leads to chaotic synchronization [17]: if two identical maps
with different initial conditions are coupled to a common external signal, they synchronize
when the coupling strength is large enough, ε > ε_c.
The security of key generation increases as the system approaches the critical point of
chaotic synchronization. The probability of a successful attack decreases like exp(−yL),
and it is possible that the exponent y diverges as the coupling constant between the neural
nets and the chaotic maps is tuned to be critical.
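The coupling mechanism of (14) can be illustrated with a toy driver signal; the sketch below (ours, with an arbitrary coupling ε and a random stand-in for the common field h̃) shows two maps with different initial conditions being pulled together.

```python
import numpy as np

def logistic_step(s, h_tilde, alpha=3.95, eps=0.3):
    """Equation (14): a logistic map driven by the transformed local field
    h_tilde in [0, 2].  For eps = 0 this is the plain logistic map; for a
    sufficiently strong coupling the driven maps synchronize."""
    return alpha * (1 - eps) * s * (1 - s) + (eps / 2) * h_tilde

rng = np.random.default_rng(0)
sA, sB = 0.2, 0.7                           # different initial conditions
for t in range(200):
    h = rng.uniform(0, 2)                   # stand-in for the common field signal
    sA, sB = logistic_step(sA, h), logistic_step(sB, h)
print(abs(sA - sB))                         # small when eps exceeds the threshold
```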
8 Conclusions
A new phenomenon has been observed: synchronization by mutual learning. If the learning
rate η is large enough, and if the weight vectors are kept normalized, then the two networks
relax to a parallel orientation. Their weight vectors still move like a random walk on a
hypersphere, but each network has complete knowledge about its partner.
It has been shown how this phenomenon can be used for cryptography. The two partners
can create a common secret key over a public channel. The fact that the parties learn
mutually gives them an advantage over the attacker, who is learning one-way. In contrast
to number-theoretical methods the networks are very fast; essentially they are linear filters,
and the complexity of generating a key of length N scales with N (for sequential update of the
weights).
Yet sophisticated attackers which use ensembles of cooperating attackers have a good
chance to synchronize. However, advanced algorithms for synchronization, which involve
different types of chaotic synchronization, seem to be more secure. Such models are subjects
of active research, and only the future will tell whether the security of neural-network
cryptography can compete with number-theoretical methods.
References
[1] J. Hertz, A. Krogh, and R. G. Palmer: Introduction to the Theory of Neural Computation (Addison-Wesley, Redwood City, 1991)
[2] A. Engel and C. Van den Broeck: Statistical Mechanics of Learning (Cambridge
University Press, 2001)
[3] M. Biehl and N. Caticha: Statistical Mechanics of On-line Learning and Generalization, The Handbook of Brain Theory and Neural Networks, ed. by M. A. Arbib (MIT
Press, Berlin, 2001)
[4] E. Eisenstein, I. Kanter, D. A. Kessler and W. Kinzel, Phys. Rev. Lett. 74, 6-9 (1995)
[5] I. Kanter, D. A. Kessler, A. Priel and E. Eisenstein, Phys. Rev. Lett. 75, 2614-2617
(1995); L. Ein-Dor and I. Kanter, Phys. Rev. E 57, 6564 (1998); M. Schröder and W.
Kinzel, J. Phys. A 31, 9131-9147 (1998); A. Priel and I. Kanter, Europhys. Lett. (2000)
[6] R. Metzler, W. Kinzel and I. Kanter, Phys. Rev. E 62, 2555 (2000)
[7] D. R. Stinson, Cryptography: Theory and Practice (CRC Press, 1995)
[8] R. Metzler, W. Kinzel, L. Ein-Dor and I. Kanter, Phys. Rev. E 63, 056126 (2001)
[9] M. Rosen-Zvi, I. Kanter and W. Kinzel, cond-mat/0202350 (2002)
[10] R. Urbanczik, private communication
[11] I. Kanter, W. Kinzel and E. Kanter, Europhys. Lett. 57, 141 (2002)
[12] A. Klimov, A. Mityagin and A. Shamir, ASIACRYPT 2002: 288-298
[13] W. Kinzel, R. Metzler and I. Kanter, J. Phys. A 33, L141 (2000)
[14] W. Kinzel, contribution to Networks, ed. by H. G. Schuster and S. Bornholdt, to be
published by Wiley-VCH (2002)
[15] R. Mislovaty, Y. Perchenok, I. Kanter and W. Kinzel, Phys. Rev. E 66, 066102 (2002)
[16] G. Reents and R. Urbanczik, Phys. Rev. Lett. 80, 5445 (1998)
[17] R. Mislovaty, E. Klein, I. Kanter and W. Kinzel, Phys. Rev. Lett. 91, 118701 (2003)
[18] M. Rosen-Zvi, E. Klein, I. Kanter and W. Kinzel, Phys. Rev. E 66, 066135 (2002)
[19] M. Copelli, M. Boutin, C. Van den Broeck and B. Van Rompaey, Europhys. Lett. 46,
139 (1999)
Beat Tracking the Graphical Model Way
Dustin Lang
Nando de Freitas
Department of Computer Science
University of British Columbia
Vancouver, BC
{dalang, nando}@cs.ubc.ca
Abstract
We present a graphical model for beat tracking in recorded music. Using
a probabilistic graphical model allows us to incorporate local information
and global smoothness constraints in a principled manner. We evaluate
our model on a set of varied and difficult examples, and achieve impressive results. By using a fast dual-tree algorithm for graphical model inference, our system runs in less time than the duration of the music being
processed.
1 Introduction
This paper describes our approach to the beat tracking problem. Dixon describes beats as
follows: "much music has as its rhythmic basis a series of pulses, spaced approximately
equally in time, relative to which the timing of all musical events can be described. This
phenomenon is called the beat, and the individual pulses are also called beats" [1]. Given a
piece of recorded music (an MP3 file, for example), we wish to produce a set of beats that
correspond to the beats perceived by human listeners.
The set of beats of a song can be characterised by the trajectories through time of the tempo
and phase offset. Tempo is typically measured in beats per minute (BPM), and describes
the frequency of beats. The phase offset determines the time offset of the beat. When
tapping a foot in time to music, tempo is the rate of foot tapping and phase offset is the
time at which the tap occurs.
The beat tracking problem, in its general form, is quite difficult. Music is often ambiguous;
different human listeners can perceive the beat differently. There are often several beat
tracks that could be considered correct. Human perception of the beat is influenced both by
"local" and contextual information; the beat can continue through several seconds of silence
in the middle of a song.
We see the beat tracking problem as not only an interesting problem in its own right, but
as one aspect of the larger problem of machine analysis of music. Given beat tracks for
a number of songs, we could extract descriptions of the rhythm and use these features for
clustering or searching in music collections. We could also use the rhythm information to
do structural analysis of songs - for example, to find repeating sections. In addition, we
note that beat tracking produces a description of the time scale of a song; knowledge of the
tempo of a song would be one way to achieve time-invariance in a symbolic description.
Finally, we note that beat tracking tells us where the important parts of a song are; the
beats (and major divisions of the beats) are good sampling points for other music-analysis
problems such as note detection.
2 Related Work
Many researchers have investigated the beat tracking problem; we present only a brief
overview here. Scheirer [2] presents a system, based on psychoacoustical observations, in
which a bank of resonators compete to explain the processed audio input. The system is
tested on a difficult set of examples, and considerable success is reported. The most common problem is a lack of global consistency in the results - the system switches between
locally optimal solutions.
Goto [3] has described several systems for beat tracking. He takes a very pragmatic view
of the problem, and introduces a number of assumptions that allow good results in a limited
domain - pop music in 4/4 time with roughly constant tempo, where bass or snare drums
keep the beat according to drum patterns known a priori, or where chord changes occur at
particular times within the measure.
Cemgil and Kappen [4] phrase the beat tracking problem in probabilistic terms, and we
adapt their model as our local observation model. They use MIDI-like (event-based) input
rather than audio, so the results are not easily comparable to our system.
3 Graphical Model
In formulating our model for beat tracking, we assume that the tempo is nearly constant
over short periods of time, and usually varies smoothly. We expect the phase to be continuous. This allows us to use the simple graphical model shown in Figure 1. We break
the song into a set of frames of two seconds; each frame is a node in the graphical model.
We expect the tempo to be constant within each frame, and the tempo and phase offset
parameters to vary smoothly between frames.
Figure 1: Our graphical model for beat tracking. The hidden state X is composed of
the state variables tempo and phase offset. The observations Y are the features extracted
by our audio signal processing. The potential function φ describes the compatibility of
the observations with the state, while the potential function ψ describes the smoothness
between neighbouring states.
In this undirected probabilistic graphical model, the potential function φ describes the compatibility of the state variables X = {T, P}, composed of tempo T and phase offset P, with
the local observations Y. The potential function ψ describes the smoothness constraints
between frames. The observation Y comes from processing the audio signal, which is described in Section 5. The φ function comes from domain knowledge and is described in
Section 4. This model allows us to trade off local fit and global smoothness in a principled
manner. By using an undirected model, we allow contextual information to flow both
forward and backward in time.
In such models, belief propagation (BP) [5] allows us to compute the marginal probabilities
of the state variables in each frame. Alternatively, maximum belief propagation (max-BP)
allows a joint maximum a posteriori (MAP) set of state variables to be determined. That
is, given a song, we generate the observations Yi , i = 1 . . . F , (where F is the number of
frames in the song) and seek a set of states Xi that maximize the joint product
F
FY
?1
1 Y
P (X, Y ) =
?(Yi , Xi )
?(Xi , Xi+1 ) .
Z i=1
i=1
Our smoothness function ψ is the product of tempo and phase smoothness components ψ_T
and ψ_P. For the tempo component, we use a Gaussian on the log of the tempo. For the phase
offset component, we want the phases to agree at a particular point in time: the boundary
between the two frames (nodes), t_b. We find the phase θ of t_b predicted by the parameters
in each frame, and place a Gaussian prior on the distance between points on the unit circle
with these phases:

    ψ(X_1, X_2 | t_b) = ψ_T(T_1, T_2) ψ_P(T_1, P_1, T_2, P_2 | t_b)
                      = N(log T_1 − log T_2, σ_T²) N((cos θ_1 − cos θ_2, sin θ_1 − sin θ_2), σ_P²)

where θ_i = 2π T_i t_b − P_i and N(x, σ²) is a zero-mean Gaussian with variance σ². We set
σ_T = 0.1 and σ_P = 0.1π. The qualitative results seem to be fairly stable as a function of
these smoothness parameters.
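In code, the log of ψ is a sum of two quadratic penalties; the sketch below is our own reading of the definition, dropping the Gaussian normalization constants (which do not affect the MAP solution).

```python
import numpy as np

def log_psi(T1, P1, T2, P2, tb, sigma_T=0.1, sigma_P=0.1 * np.pi):
    """Log of the (unnormalized) smoothness potential between two frames.
    First term: Gaussian on the log-tempo change; second term: Gaussian on
    the chord distance between the unit-circle points with phases theta_i."""
    th1 = 2 * np.pi * T1 * tb - P1
    th2 = 2 * np.pi * T2 * tb - P2
    tempo = (np.log(T1) - np.log(T2)) ** 2 / sigma_T ** 2
    phase = ((np.cos(th1) - np.cos(th2)) ** 2 +
             (np.sin(th1) - np.sin(th2)) ** 2) / sigma_P ** 2
    return -0.5 * (tempo + phase)
```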
4 Domain Knowledge
In this section, we describe the derivation of our local potential function (also known as the
observation model) φ(Y_i, X_i).
Our model is an adaptation of the work of [4], which was developed for use with MIDI
input. Their model is designed so that it "prefers simpler [musical] notations". The beat
is divided into a fixed number of bins (some power of two), and each note is assigned to
the nearest bin. The probability of observing a note at a coarse subdivision of the beat is
greater than at a finer subdivision. More precisely, a note that is quantized to the bin at beat
number k has probability p(k) ∝ exp(−λ d(k)), where d(k) is the number of digits in the
binary representation of the number k mod 1.
Since we use recorded music rather than MIDI, we must perform signal processing to
extract features from the raw data. This process produces a signal that has considerably
more uncertainty than the discrete events of MIDI data, so we adjust the model. We add
the constraint that features should be observed near some quantization point, which we
express by centering a Gaussian around each of the quantization points. The variance of
this Gaussian, σ_Q², is in units of beats, so we arrive at the periodic template function b(t)
shown in Figure 2. We have set the number of bins to 8, λ to one, and σ_Q = 0.025.
The template function b(t) expresses our belief about the distribution of musical events
within the beat. By shifting and scaling b(t), we can describe the expected distribution of
notes in time for different tempos and phase offsets:
    b(t | T, P) = b(T t − P/(2π)).
Our signal processing (described below) yields a discrete set of events that are meant to
correspond to musical events. Events occur at a particular time t and have a "strength" or
"energy" E. Given a set of discrete events Y = {t_i, E_i}, i = 1 . . . M, and state variables X = {T, P}, we take the probability that the events were drawn from the expected
distribution b(t | T, P ):
    φ(Y, X) = φ({t, E}, {T, P}) = Π_{i=1}^{M} b(t_i | T, P)^{E_i}.
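A sketch of the log observation potential follows (ours); the mixture of Gaussian bumps stands in for b(t), and its exact normalization, along with the hardcoded bin weights for 8 bins, are our assumptions.

```python
import numpy as np

def log_b(t, lam=1.0, sigma_q=0.025):
    """Periodic template log b(t) for 8 bins: a Gaussian bump at each
    quantization point k/8, weighted by exp(-lam * d(k)) with d(k) the
    number of binary digits of the beat position (0,3,2,3,1,3,2,3)."""
    d = np.array([0, 3, 2, 3, 1, 3, 2, 3])
    centers = np.arange(8) / 8.0
    frac = np.asarray(t, float) % 1.0
    gap = np.abs(frac[..., None] - centers)
    dist = np.minimum(gap, 1.0 - gap)            # periodic distance within the beat
    mix = np.exp(-lam * d) * np.exp(-0.5 * (dist / sigma_q) ** 2)
    return np.log(mix.sum(-1))

def log_phi(times, energies, T, P):
    """Unnormalized log potential: sum_i E_i * log b(T t_i - P/(2 pi))."""
    args = T * np.asarray(times, float) - P / (2 * np.pi)
    return float(np.sum(np.asarray(energies, float) * log_b(args)))
```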
Figure 2: One period of our template function b(t), which gives the expected distribution of
notes within a beat. Given tempo and phase offset values, we stretch and shift this function
to get the expected distribution of notes in time.
This is a multinomial probability function in the continuous limit (as the bin size becomes
zero). Note that φ is a positive, unnormalized potential function.
5 Signal Processing
Our signal processing stage is meant to extract features that approximate musical events
(drum beats, piano notes, guitar strums, etc.) from the raw audio signal. As discussed
above, we produce a set of events composed of time and "strength" values, where the
strength describes our certainty that an event occurred. We assume that musical events
are characterised by brief, rapid increases in energy in the audio signal. This is certainly
the case for percussive instruments such as drums and piano, and will often be the case for
string and woodwind instruments and for voices. This assumption breaks for sounds that
fade in smoothly rather than "spikily".
We begin by taking the short-time Fourier transform (STFT) of the signal: we slide a 50
millisecond Hann window over the signal in steps of 10 milliseconds, take the Fourier
transform, and extract the energy spectrum. Following a suggestion by [2], we pass the
energy spectrum through a bank of five filters that sum the energy in different portions
of the spectrum. We take the logarithm of the summed energies to get a "loudness" signal.
Next, we convolve each of the five resulting energy signals with a filter that detects positive-going edges. This can be considered a "loudness gain" signal. Finally, we find the maxima
within 50 ms neighbourhoods. The result is a set of points that describe the energy gain
signal in each band, with emphasis on the maxima. These are the features Y that we use in
our local probability model φ.
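The feature extraction could be rendered roughly as follows; the band edges, the edge filter, and the peak-picking window below are our assumptions where the text leaves details open.

```python
import numpy as np
from scipy.signal import stft, argrelmax

def onset_events(audio, sr, n_bands=5):
    """Sketch of the front end: 50 ms Hann windows every 10 ms, energies
    summed in a few bands, log taken, a positive-edge filter applied, and
    local maxima kept as (time, strength) events."""
    f, t, Z = stft(audio, fs=sr, window="hann",
                   nperseg=int(0.05 * sr), noverlap=int(0.04 * sr))
    power = np.abs(Z) ** 2
    edges = np.linspace(0, len(f), n_bands + 1).astype(int)
    loud = np.log(1e-10 + np.stack(
        [power[a:b].sum(0) for a, b in zip(edges[:-1], edges[1:])]))
    gain = np.maximum(0.0, np.diff(loud, axis=1))   # positive-going edges only
    events = []
    for band in gain:
        for i in argrelmax(band, order=5)[0]:        # maxima in ~50 ms neighbourhoods
            events.append((t[i + 1], band[i]))
    return events
```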
6 Fast Inference
To find a maximum a posteriori (MAP) set of state variables that best explain a set of
observations, we need to optimize a 2F -dimensional, continuous, non-linear, non-Gaussian
function that has many local extrema. F is the number of frames in the song, so is on the
order of the length of the song in seconds - typically in the hundreds. This is clearly
difficult. We present two approximation strategies. In the first strategy, we convert the
continuous state space into a uniform discrete grid and run discrete belief propagation. In
the second strategy, we run a particle filter in the forward direction, then use the particles
as "grid" points and run discrete belief propagation as per [6].
Since the landscape we are optimizing has many local maxima, we must use a fine discretization grid (for the first strategy) or a large number of particles (for the second strategy). The message-passing stage in discrete belief propagation takes O(N²) if performed
naively, where N is the number of discretized states (or particles) per frame. We use a dual-tree recursion strategy as proposed in [7] and extended to maximum a posteriori inference
in [8]. With this approach, the computation becomes feasible.
As an aside, we note that if we wish to compute the smoothed marginal probabilities rather
than the MAP set of parameters, then we can use standard discrete belief propagation or
particle smoothing. In both cases the naive cost is O(N²), but by using the Fast Gauss
Transform [9] the cost becomes O(N). This is possible because our smoothness potential
ψ is a low-dimensional Gaussian.
For the results presented here, we discretize the state space into N_T = 90 tempo values and
N_P = 50 phase offset values for the belief propagation version. We distribute the tempo
values uniformly on a log scale between 40 and 150 BPM, and distribute the phase offsets
uniformly. For the particle filter version, we use N_T × N_P = 4500 particles. With these
values, our Matlab and C implementation runs at faster than real time (the duration of the
song) on a standard desktop computer.
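On the discretized grid, max-BP on this chain is ordinary Viterbi decoding; the sketch below (ours) shows the naive O(N²) message passing that the dual-tree recursion accelerates, assuming for simplicity a pairwise log-potential shared across frame boundaries.

```python
import numpy as np

def map_states(log_phi, log_psi):
    """Max-product BP on the chain.  log_phi: (F, N) local log-potentials;
    log_psi: (N, N) pairwise log-potentials.  Returns the MAP state indices."""
    F, N = log_phi.shape
    msg = np.zeros((F, N))                  # best log-score ending in each state
    back = np.zeros((F, N), dtype=int)
    msg[0] = log_phi[0]
    for i in range(1, F):
        scores = msg[i - 1][:, None] + log_psi     # the O(N^2) step per frame
        back[i] = scores.argmax(0)
        msg[i] = log_phi[i] + scores.max(0)
    states = np.empty(F, dtype=int)
    states[-1] = msg[-1].argmax()
    for i in range(F - 1, 0, -1):           # backtrack the optimal sequence
        states[i - 1] = back[i, states[i]]
    return states
```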
7 Results
A standard corpus of labelled ground truth data for the beat-tracking problem does not exist.
Therefore, we labelled a relatively small number of songs for evaluation of our algorithm,
by listening to the songs and pressing a key at each perceived beat. We sought out examples
that we thought would be difficult, and we attempted to avoid the methods of [10]. Ideally,
we would have several human listeners label each song, since this would help to capture
the ambiguity inherent in the problem. However, this would be quite time-consuming.
One can imagine several methods for speeding up the process of generating ground truth
labellings and of cleaning up the noisy results generated by humans. For example, a human
labelling of a short segment of the song could be automatically extrapolated to the remainder of the song, using energy spikes in the audio signal to fine-tune the placement of beats.
However, by generating ground truth using assumptions similar to those embodied in the
models we intend to test, we risk invalidating the results. We instead opted to use "raw"
human-labelled songs.
There is no standard evaluation metric for beat tracking. We use the ρ function presented
by Cemgil et al. [11] and used by Dixon [1] in his analysis:

    ρ(S, T) = 100 / ((N_S + N_T)/2) · Σ_{i=1}^{N_S} max_{j∈T} exp( −(S_i − T_j)² / (2σ²) )
where S and T are the ground-truth and proposed beat times, and σ is set to 40 milliseconds.
A ρ value near 100 means that each predicted beat is close to a true beat, while a value near
zero means that each predicted beat is far from a true beat.
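In code, ρ is a few lines (our own sketch, with times in seconds):

```python
import numpy as np

def rho(S, T, sigma=0.040):
    """Cemgil et al.'s score: each true beat S_i is credited with its nearest
    predicted beat in T under a Gaussian window of width sigma (seconds)."""
    S, T = np.asarray(S, float), np.asarray(T, float)
    credit = np.exp(-(S[:, None] - T[None, :]) ** 2 / (2 * sigma ** 2)).max(axis=1)
    return 100.0 * credit.sum() / ((len(S) + len(T)) / 2.0)
```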
We have focused on finding a globally-optimum beat track rather than precisely locating
each beat. We could likely improve the ρ values of our results by fine-tuning each predicted
beat, for example by finding nearby energy peaks, though we have not done this in the
results presented here.
Table 1 shows a summary of our results. Note the wide range of genres and the choice of
songs with features that we thought would make beat tracking difficult. This includes all
our results (not just the ones that look good).
The first columns list the name of the song and the reason we included it. The third column
lists the qualitative performance of the fixed grid version: double means our algorithm
produced a beat track twice as fast as ground truth, half means we tracked at half speed,
and sync means we produced a syncopated (π phase error) beat track. A blank entry means our algorithm produced the correct beat track. A star (★) means that our result incorrectly switches phase or tempo. The ρ values are after compensating for the qualitative error (if any). The fifth column shows a histogram of the absolute phase error (0 to π); this is also after correcting for qualitative error. The remaining columns contain the same items for the particle filter version.
Table 1: The songs used in our evaluation. See the text for explanation.

Song | Comment | BP ρ | PF ρ
Glenn Gould / Bach Goldberg Var'ns 1982 / Var'n 1 | Classical piano | 71 | 71
Jeno Jandó / Bach WTC / Fuga 2 (C Minor) | Piano; rubato at end | 79 | 79
Kronos Quartet / Caravan / Aaj Ki Raat | Modern string quartet | 71 | 67
Maurice Ravel / Piano Concertos / G Major - Presto | Classical orchestra | 86 | 89
Miles Davis / Kind Of Blue / So What (edit) | Jazz instrumental | 89 | 88
Miles Davis / Kind Of Blue / Blue In Green | Jazz instrumental | 81 | 80
Holly Cole / Temptation / Jersey Girl | Jazz vocal | 79 | 79
Don Ross / Passion Session / Michael Michael Michael | Solo guitar | 82 | 79
Don Ross / Huron Street / Luci Watusi | Solo guitar | 75 | 74
Tracy Chapman / For You | Guitar and voice | 79 | 79
Ben Harper / Fight For Your Mind / Oppression | Acoustic | 70 | 68
Great Big Sea / Up / Chemical Worker's Song | Newfoundland folk | 79 | 78
Buena Vista Social Club / Chan Chan | Cuban | 72 | 72
Beatles / 1967-1970 / Lucy In The Sky With Diamonds | Changes time signature | 42 | 41
U2 / Joshua Tree / Where The Streets Have No Name (edit) | Rock | 82 | 82
Cake / Fashion Nugget / I Will Survive | Rock | 57 | 59
Sublime / Second-Hand Smoke / Thanx Dub (excerpt) | Reggae | 78 | 77
Rancid / ... And Out Come The Wolves / Old Friend | Punk | 40 | 42
Green Day / Dookie / When I Come Around | Pop-punk | 70 | 69
Tortoise / TNT / A Simple Way To Go Faster Than Light... | Organic electronica | 59 | 61
Pole / 2 / Stadt | Ambient electronica | 88 | 86
Underworld / A Hundred Days Off / MoMove | Electronica | 77 | 77
Ravi Shankar / The Sounds Of India / Bhimpalsi (edit) | Solo sitar | 75 | 71
Pitamaha: Music From Bali / Puri Bagus, Bamboo (excerpt) | Indonesian gamelan | 44 | 50
Gamelan Sekar Jaya / Byomantara (excerpt) | Indonesian gamelan | 61 | 59
Figure 3: Tempo tracks for Cake / I Will Survive (axes: Tempo (BPM) vs. Time (s)). Center: "raw" ground-truth tempo (instantaneous tempo estimate based on the time between adjacent beats) and smoothed ground truth (by averaging). Left: fixed-grid version result. Right: particle filter result.
Out of 25 examples, the fixed grid version produces the correct answer in 17 cases, tracks
at double speed in two cases, half speed in two cases, syncopated in one case, and in three
cases produces a track that (incorrectly) switches tempo or phase. The particle filter version
produces 16 correct answers, two double-speed, two half-speed, two syncopated, and the
same three ?switching? tracks.
An example of a successful tempo track is shown in Figure 3.
The result for Lucy In The Sky With Diamonds (one of the "switching" results) is worth
examination. The song switches time signature between 3/4 and 4/4 time a total of five
times; see Figure 4. Our results follow the time signature change the first three times.
On the fourth change (from 4/4 to 3/4), it tracks at 2/3 the ground truth rate instead. We
note an interesting effect when we examine the final message that is passed during belief
propagation. This message tells us the maximum probability of a sequence that ends with
each state. The global maximum corresponds to the beat track shown in the left plot. The
local maximum near 50 BPM corresponds to an alternate solution in which, rather than
tracking the quarter notes, we produce one beat per measure; this track is quite plausible.
Indeed, the "true" track is difficult for human listeners. Note also that there is a local
maximum near 100 BPM but phase-shifted a half beat. This is the solution in which the
beats are syncopated from the true result.
8 Conclusions and Further Work
We present a graphical model for beat tracking and evaluate it on a set of varied and difficult examples. We achieve good results that are comparable with those reported by other
researchers, although direct comparisons are impossible without a shared data set.
There are several advantages to formulating the problem in a probabilistic setting. The beat
tracking problem has inherent ambiguity and multiple interpretations are often plausible.
With a probabilistic model, we can produce several candidate solutions with different probabilities. This is particularly useful for situations in which beat tracking is one element in
a larger machine listening application. Probabilistic graphical models allow flexible and
powerful handling of uncertainty, and allow local and contextual information to interact in
a principled manner. Additional domain knowledge and constraints can be added in a clean
and principled way. The adoption of an efficient dual tree recursion for graphical models
[7, 8] enables us to carry out inference in real time.

Figure 4: Left: Tempo tracks for Lucy In The Sky With Diamonds (axes: Tempo (BPM) vs. Time (s)). The vertical lines mark times at which the time signature changes between 3/4 and 4/4. Right: the last max-message computed during belief propagation (axes: Tempo (BPM) vs. Phase offset). Bright means high probability. The global maximum corresponds to the tempo track shown. Note the local maximum around 50 BPM, which corresponds to an alternate feasible result. See the text for discussion.
We would like to investigate several modifications of our model and inference methods.
Longer-range tempo smoothness constraints as suggested by [11] could be useful. The
extraction of MAP sets of parameters for several qualitatively different solutions would
help to express the ambiguity of the problem. The particle filter could also be changed.
At present, we first perform a full particle filtering sweep and then run max-BP. Taking
into account the quality of the partial MAP solutions during particle filtering might allow
superior results by directing more particles toward regions of the state space that are likely
to contain the final MAP solution. Since we know that our probability terrain is multimodal, a mixture particle filter would be useful [12].
References
[1] S. Dixon. An empirical comparison of tempo trackers. Technical Report TR-2001-21, Austrian Research Institute for Artificial Intelligence, Vienna, Austria, 2001.
[2] E. D. Scheirer. Tempo and beat analysis of acoustic musical signals. J. Acoust. Soc. Am., 103(1):588–601, Jan 1998.
[3] M. Goto. An audio-based real-time beat tracking system for music with or without drum-sounds. Journal of New Music Research, 30(2):159–171, 2001.
[4] A. T. Cemgil and H. J. Kappen. Monte Carlo methods for tempo tracking and rhythm quantization. Journal of Artificial Intelligence Research, 18(1):45–81, 2003.
[5] J. Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan-Kaufmann, 1988.
[6] S. J. Godsill, A. Doucet, and M. West. Maximum a posteriori sequence estimation using Monte Carlo particle filters. Ann. Inst. Stat. Math., 53(1):82–96, March 2001.
[7] A. G. Gray and A. W. Moore. "N-Body" problems in statistical learning. In Advances in Neural Information Processing Systems 4, pages 521–527, 2000.
[8] M. Klaas, D. Lang, and N. de Freitas. Fast maximum a posteriori inference in Monte Carlo state space. In AI-STATS, 2005.
[9] L. Greengard and J. Strain. The fast Gauss transform. SIAM Journal of Scientific Statistical Computing, 12(1):79–94, 1991.
[10] D. LaLoudouana and M. B. Tarare. Data set selection. Presented at NIPS Workshop, 2002.
[11] A. T. Cemgil, B. Kappen, P. Desain, and H. Honing. On tempo tracking: Tempogram representation and Kalman filtering. Journal of New Music Research, 28(4):259–273, 2001.
[12] J. Vermaak, A. Doucet, and P. Pérez. Maintaining multi-modality through mixture tracking. In ICCV, 2003.
Common-Frame Model for Object Recognition
Pierre Moreels
Pietro Perona
California Institute of Technology - Pasadena CA91125 - USA
pmoreels,[email protected]
Abstract
A generative probabilistic model for objects in images is presented. An
object consists of a constellation of features. Feature appearance and
pose are modeled probabilistically. Scene images are generated by drawing a set of objects from a given database, with random clutter sprinkled
on the remaining image surface. Occlusion is allowed.
We study the case where features from the same object share a common reference frame. Moreover, parameters for shape and appearance densities are shared across features. This is to be contrasted with previous work on probabilistic "constellation" models where features depend on each other, and each feature and model have different pose and appearance statistics [1, 2]. These two differences allow us to build models containing hundreds of features, as well as to train each model from a single example. Our model may also be thought of as a probabilistic revisitation of Lowe's model [3, 4].
We propose an efficient entropy-minimization inference algorithm that
constructs the best interpretation of a scene as a collection of objects and
clutter. We test our ideas with experiments on two image databases. We
compare with Lowe's algorithm and demonstrate better performance, in
particular in presence of large amounts of background clutter.
1 Introduction
There is broad agreement in the machine vision literature that objects and object categories
should be represented as collections of features or parts with distinctive appearance and
mutual position [1, 2, 4, 5, 6, 7, 8, 9]. A number of ideas for efficient detection algorithms
(find instances of a given object category, e.g. faces) have been proposed by virtually all
the cited authors, far fewer for recognition (list all objects and their pose in a given image)
where matching would ideally take logarithmic time with respect to the number of available models [3, 4]. Learning of parameters characterizing feature shape or appearance is still a difficult area, with most authors opting for heavy human intervention (typically segmentation and alignment of the training examples, although [1, 2, 3] train without supervision) and very large training sets for object categories (typically of the order of 10³–10⁴, although [10] recently demonstrated learning categories from 1-10 examples).
This work is based on two complementary efforts: the deterministic recognition system
proposed by Lowe [3, 4], and the probabilistic constellation models by Perona and collaborators [1, 2]. The first line of work has three attractive characteristics: objects are
represented with hundreds of features, thus increasing robustness; models are learned from
a single training example; last but not least, recognition is efficient with databases of hundreds of objects. The drawback of Lowe's approach is that both modeling decisions and
algorithms rely on heuristics, whose design and performance may be far from optimal in
Figure 1: Diagram of our recognition model showing database, test image and two competing hypotheses. To avoid a cluttered diagram, only one partial hypothesis is displayed for each hypothesis.
The predicted position of models according to the hypotheses are overlaid on the test image.
some circumstances. Conversely, the second line of work is based on principled probabilistic object models which yield principled and, in some respects, optimal algorithms
for learning and recognition/detection. Unfortunately, the large number of parameters employed in each model limit in practice the number of features being used and require many
training examples. By recasting Lowe?s model and algorithms in probabilistic terms, we
hope to combine the advantages of both methods. Besides, in this paper we choose to focus
on individual objects as in [3, 4] rather than on categories as in [1, 2].
In [11] we presented a model aimed at the same problem of individual object recognition. A major difference with the work described here lies in the probabilistic treatment of
hypotheses, which allows us here to use directly hypothesis likelihood as a guide for the
search, instead of the arbitrary admissible heuristic required by A*.
2 Probabilistic framework and notations
Each model object is represented as a collection of features. Features are informative parts
extracted from images by an interest point operator. Each model is the set of features
extracted from one training image of a given object - although this could be generalized to
features from many images of the same object. Models are indexed by k and denoted by
mk , while indices i and j are used respectively for features extracted from the test image
and from model images: f i denotes the i ? th test feature, while f jk denotes the j ? th
feature from the k ? th model. The features extracted from model images (training set)
form the database. A feature detected in a test image can be a consequence of the presence
of a model object in the image, in which case it should be associated to a feature from the
database. In the alternative, this feature is attributed to a clutter - or background - detection.
The geometric information associated to each feature contains position information (x and
y coordinates, denoted by the vector x), orientation (denoted by θ) and scale (denoted by σ). It is denoted by X_i = (x_i, θ_i, σ_i) for test feature f_i and X_j^k = (x_j^k, θ_j^k, σ_j^k) for model feature f_j^k. This geometric information is measured relative to the standard reference
frame of the image in which the feature has been detected. All features extracted from the
same image share the same reference frame.
The appearance information associated to a feature is a descriptor characterizing the local
image appearance near this feature. The measured appearance information is denoted by
A_i for test feature f_i and A_j^k for model feature f_j^k. In our experiments, features are detected
at multiple scales at the extrema of difference-of-gaussians filtered versions of the image [4,
12]. The SIFT descriptor [4] is then used to characterize the local texture about keypoints.
A partial hypothesis h explains the observations made in a fraction of the test image. It
combines a model image m h and a corresponding set of pose parameters X h . Xh encodes
position, rotation, scale (this can easily be extended to affine transformations). We assume
independence between partial hypotheses. This requires in particular independence between models. Although reasonable, this approximation is not always true (e.g. a keyboard
is likely to be detected close to a computer screen). This allows us to search in parallel for
multiple objects in a test image.
A hypothesis H is the combination of several partial hypotheses, such that it explains completely the observations made in the test image. A special notation H 0 or h0 denotes any
(partial) hypothesis that states that no model object is present in a given fraction of the test
image, and that features that could have been detected there are due to clutter.
Our objective is to find which model objects are present in the test scene, given the observations made in the test scene and the information that is present in the database. In
probabilistic terms, we look for hypotheses H for which the likelihood ratio LR(H) = P(H | {f_i}, {f_j^k}) / P(H_0 | {f_i}, {f_j^k}) > 1. This ratio characterizes how well models and poses specified by H explain the observations, as opposed to them being generated by clutter. Using Bayes' rule and after simplifications,

$$LR(H) = \frac{P(H \mid \{f_i\}, \{f_j^k\})}{P(H_0 \mid \{f_i\}, \{f_j^k\})} = \frac{P(\{f_i\} \mid \{f_j^k\}, H) \cdot P(H)}{P(\{f_i\} \mid \{f_j^k\}, H_0) \cdot P(H_0)} \qquad (1)$$
where we used P({f_j^k} | H) = P({f_j^k}) since the database observations do not depend on
the current hypothesis.
A key assumption of this work is that once the pose parameters of the objects (and thus
their reference frames) are known, the geometric configuration and appearance of the test
features are independent from each other. We also assume independence between features
associated to models and features associated to clutter detections, as well
as independence
between separate clutter detections. Therefore, P({f_i} | {f_j^k}, H) = ∏_i P(f_i | {f_j^k}, H). These assumptions of independence are also made in [13], and are implicit in [4].
Assignment vectors v represent matches between features from the test scene, and model
features or clutter. The dimension of each assignment vector is the number of test features
n_test. Its i-th component v(i) = (k, j) denotes that the test feature f_i is matched to f_{v(i)} = f_j^k, the j-th feature from model m_k; v(i) = (0, 0) denotes the case where f_i is
attributed to clutter. The set V H of assignment vectors compatible with a hypothesis H are
those that assign test features only to models present in H (and to clutter). In particular, the
only assignment vector compatible with h_0 is v_0 such that ∀i, v_0(i) = (0, 0). We obtain

$$LR(H) = \frac{P(H)}{P(H_0)} \sum_{v \in V_H} P(v \mid \{f_j^k\}, m_h, X_h) \prod_{h \in H} \prod_{i \mid f_i \in h} \frac{P(f_i \mid f_{v(i)}, m_h, X_h)}{P(f_i \mid h_0)} \qquad (2)$$
P(H) is a prior on hypotheses; we assume it is constant. The term P(v | {f_j^k}, m_h, X_h) is discussed in 3.1; we now explore the other terms.
• P(f_i | f_{v(i)}, m_h, X_h): f_i and f_{v(i)} are believed to be one and the same feature. Differences measured between them are noise due to the imaging system as well as distortions caused by viewpoint or lighting condition changes. This noise probability p_n encodes differences in appearance of the descriptors, but also in geometry, i.e. position, scale and orientation. Assuming independence between appearance information and geometry information,

$$p_n(f_i \mid f_j^k, m_h, X_h) = p_{n,A}(A_i \mid A_{v(i)}, m_h, X_h) \cdot p_{n,X}(X_i \mid X_{v(i)}, m_h, X_h) \qquad (3)$$
Figure 2: Snapshots from the iterative matching process. Two competing hypotheses are displayed (top and bottom rows). a) Each assignment vector contains one assignment, suggesting a transformation (red box). b) End of the iterative process. The correct hypothesis is supported by numerous matches and high belief, while the wrong hypothesis has only weak support from few matches and low belief.
The error in geometry is measured by comparing the values observed in the test image,
with the predicted values that would be observed if the model features were transformed according to the parameters X_h. Let us denote by X_h(x_{v(i)}), X_h(θ_{v(i)}), X_h(σ_{v(i)}) those predicted values; the geometry part of the noise probability can then be decomposed into

$$p_{n,X}(X_i \mid X_{v(i)}, h) = p_{n,x}(x_i, X_h(x_{v(i)})) \cdot p_{n,\theta}(\theta_i, X_h(\theta_{v(i)})) \cdot p_{n,\sigma}(\sigma_i, X_h(\sigma_{v(i)})) \qquad (4)$$
• P(f_i | h_0) is a density on the appearance and position of clutter detections, denoted by p_bg(f_i). We can decompose this density as well into an appearance term and a geometry term:

$$p_{bg}(f_i) = p_{bg,A}(A_i) \cdot p_{bg,X}(X_i) = p_{bg,A}(A_i) \cdot p_{bg,x}(x_i) \cdot p_{bg,\sigma}(\sigma_i) \cdot p_{bg,\theta}(\theta_i) \qquad (5)$$

p_{bg,A}, p_{bg,x}, p_{bg,σ} and p_{bg,θ} are densities that characterize, for clutter detections, appearance, position, scale and rotation respectively.
For lack of space, and since it is not the main focus of this paper, we will not go into the details of how the "foreground density" p_n and the "background density" p_bg are learned.
The main assumption is that those densities are shared across features, instead of having
one set of parameters for each feature as in [1, 2]. This results in an important decrease of
the number of parameters to be learned, at a slight cost in the model expressiveness.
3 Search for the best interpretation of the test image
The building block of the recognition process is a question, comparing a feature from a
database model with a feature of the test image. A question selects a feature from the
database, and tries to identify if and where this feature appears in the test image.
3.1 Assignment vectors compatible with hypotheses
For a given hypothesis H, the set of possible assignment vectors V H is too large for explicit
exploration. Indeed, each potential match can either be accepted or rejected, which creates
a combinatorial explosion. Hence, we approximate the summation in (2) by its largest
term. In particular, each assignment vector v and each model referenced in v implies a
set of pose parameters X_v (extracted e.g. with least-squares fitting). Therefore, the term P(v | {f_j^k}, m_h, X_h) from (2) will be significant only when X_v ≈ X_h, i.e. when the pose implied by the assignment vector agrees with the pose specified by the partial hypothesis. We consider only the assignment vectors v for which X_v ≈ X_h. P(v_H | {f_j^k}, h) is assumed to be close to 1. Eq. (2) becomes

$$LR(H) \approx \frac{P(H)}{P(H_0)} \prod_{h \in H} \prod_{i \mid f_i \in h} \frac{P(f_i \mid f_{v_h(i)}, m_h, X_h)}{P(f_i \mid h_0)} \qquad (6)$$
Our recognition system proceeds by asking questions sequentially and adding matches to
assignment vectors. It is therefore natural to define, for a given hypothesis H and the
corresponding assignment vector v_H and t ≤ n_test, the belief in v_H by

$$B_0(v_H) = 1, \qquad B_t(v_H) = \frac{p_n(f_t \mid f_{v(t)}, m_{h_t}, X_{h_t})}{p_{bg}(f_t \mid h_0)} \cdot B_{t-1}(v_H) \qquad (7)$$
The geometric part of the belief (cf. (3)-(5)) characterizes how close the pose X_v implied by the assignments is to the pose X_h specified by the hypothesis. The appearance component of the belief characterizes the quality of the appearance matches for the pairs (f_i, f_{v(i)}).
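In practice the belief of Eq. (7) is a running product of likelihood ratios, which is conveniently accumulated in log space. The following is a minimal sketch in our own Python, where p_n and p_bg are placeholder callables standing in for the learned foreground and background densities.

import math

def update_log_belief(log_belief, f_test, f_model, pose, p_n, p_bg):
    """Multiply the belief by p_n(f_t | f_v(t), m_h, X_h) / p_bg(f_t | h_0)."""
    return log_belief + math.log(p_n(f_test, f_model, pose)) - math.log(p_bg(f_test))

log_B = 0.0  # B_0(v_H) = 1
# for each accepted assignment (f_i, f_j^k):
#     log_B = update_log_belief(log_B, f_i, f_jk, pose, p_n, p_bg)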
3.2 Entropy-based optimization
Our goal is to quickly find the hypothesis that best explains the observations, i.e. the hypothesis (models+poses) that has the highest likelihood ratio. We compute such a hypothesis
incrementally by asking questions sequentially. Each time a question is asked we update
the beliefs. We stop the process and declare a detection (i.e. a given model is present in
the image) as soon as the belief of a corresponding hypothesis exceeds a given confidence
threshold. The speed with which we reach such a conclusion depends on choosing cleverly
the next question. A greedy strategy says that the best next question is the one that takes us
closest to a detection decision. We do so by considering the entropy of the vector of beliefs
(the vector may be normalized to 1 so that each belief is in fact a probability): the lower the
entropy the closer we are to a detection. Therefore we study the following heuristic: The
most informative next question is the one that minimizes the expectation of the entropy of
our beliefs. We call this strategy ?minimum expected entropy? (MEE). This idea is due to
Geman et al. [14].
Calculating the MEE question is, unfortunately, a complex and expensive calculation in
itself. In Monte-Carlo simulations of a simplified version of our problem we notice that
the MEE strategy tends to ask questions that relate to the maximum-belief hypothesis.
Therefore we approximate the MEE strategy with a simple heuristic: The next question
consists of attempting to match one feature of the highest-belief model; specifically, the
feature with best appearance match to a feature in the test image.
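A sketch of this greedy approximation follows (our own Python; the Euclidean descriptor-distance criterion is an assumption standing in for the paper's appearance match):

import numpy as np

def next_question(hypotheses, test_desc):
    """hypotheses: list of (belief, model_descriptors); test_desc: (n, d) array."""
    # Pick the highest-belief hypothesis, then the model feature whose
    # descriptor is closest in appearance to some test feature.
    belief, model_desc = max(hypotheses, key=lambda h: h[0])
    d = np.linalg.norm(model_desc[:, None, :] - test_desc[None, :, :], axis=2)
    j, i = np.unravel_index(np.argmin(d), d.shape)
    return j, i  # try matching model feature j against test feature i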
3.3 Search for the best hypotheses
In an initialization step, a geometric hash table [3, 6, 7] is created by discretizing the space
of possible transformations. Note that we add partial hypotheses to a hypothesis only one at
a time, which allows us to discretize only the space of partial hypotheses (models + poses),
instead of discretizing the space of combinations of partial hypotheses.
Questions to be examined are created by pairing database features to the test features closest in terms of appearance. Note that since features encode location, orientation and scale,
any single assignment between a test feature and a model feature contains enough information to characterize a similarity transformation. It is therefore natural to restrict the set
of possible transformations to similarities, and to insert each candidate assignment in the
corresponding geometric hash table entry. This forms a pool of candidate assignments. The
set of hypotheses is initialized to the center of the hash table entries, and their belief is set
to 1. The motivation for this initialization step is to examine, for each partial hypothesis,
only a small number of candidate matches. A partial hypothesis corresponds to a hash table
entry; we consider only the candidate assignments that fall into this same entry.
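As an illustration, the following sketch shows how a single candidate match can be converted to a similarity transform and hashed (our own Python; the bin widths are illustrative choices, not the paper's values):

import math
from collections import defaultdict

hash_table = defaultdict(list)

def pose_from_match(xt, tht, st, xm, thm, sm):
    """Similarity transform (scale, rotation, translation) mapping model to test.
    (xt, tht, st) and (xm, thm, sm) are position, orientation, scale of the
    test and model features respectively."""
    scale = st / sm
    rot = (tht - thm) % (2 * math.pi)
    c, s_ = math.cos(rot), math.sin(rot)
    tx = xt[0] - scale * (c * xm[0] - s_ * xm[1])
    ty = xt[1] - scale * (s_ * xm[0] + c * xm[1])
    return scale, rot, tx, ty

def hash_match(model_id, match, pose, s_bin=0.5, r_bin=math.pi / 6, t_bin=50.0):
    scale, rot, tx, ty = pose
    key = (model_id, round(math.log2(scale) / s_bin),
           round(rot / r_bin), round(tx / t_bin), round(ty / t_bin))
    hash_table[key].append(match)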
Each iteration proceeds as follows. The hypothesis H that currently has the highest likelihood ratio is selected. If the geometric hash table entry corresponding to the current partial
hypothesis h contains candidate assignments that have not been examined yet, one of them, (f_i, f_j^{m_h}), is picked - currently, the best appearance match - and the probabilities p_bg(f_i) and p_n(f_i | f_j^{m_h}, m_h, X_h) are computed. As mentioned in 3.1, only the best assignment
Figure 3: Results from our algorithm in various situations (viewpoint change can be seen in Fig.6).
Each row shows the best hypothesis in terms of belief. a) Occlusion b) Change of scale.
Figure 4: ROC curves for both experiments. The performance improvement from our probabilistic
formulation is particularly significant when a low false alarm rate is desired. The threshold used is
the repeatability rate defined in [15].
vector is explored: if p_n(f_i | f_j^{m_h}, m_h, X_h) > p_bg(f_i), the match is accepted and inserted in the hypothesis. Otherwise, f_i is considered a clutter detection and f_j^{m_h} a missed
detection. The belief B(v H ) and the likelihood ratio LR(H) are updated using (7).
After adding an assignment to a hypothesis, frame parameters X h are recomputed using
least-squares optimization, based on all assignments currently associated to this hypothesis. This parameter estimation step provides a progressive refinement of the model pose
parameters as assignments are added. Fig.2 illustrates this process.
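The least-squares step has a standard closed form for similarity transforms. A minimal sketch in our own Python, under the assumption of the usual linear parameterization [a −b; b a] plus translation:

import numpy as np

def fit_similarity(model_pts, test_pts):
    """Solve test ~ [a -b; b a] @ model + [tx, ty] in the least-squares sense.
    model_pts, test_pts: (n, 2) arrays of matched point coordinates, n >= 2."""
    m = np.asarray(model_pts, float)
    t = np.asarray(test_pts, float).ravel()  # interleaved x0, y0, x1, y1, ...
    A = np.zeros((2 * len(m), 4))
    A[0::2] = np.c_[m[:, 0], -m[:, 1], np.ones(len(m)), np.zeros(len(m))]
    A[1::2] = np.c_[m[:, 1],  m[:, 0], np.zeros(len(m)), np.ones(len(m))]
    a, b, tx, ty = np.linalg.lstsq(A, t, rcond=None)[0]
    return a, b, tx, ty  # scale = hypot(a, b), rotation = atan2(b, a)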
The exploration of a partial hypothesis ends when no more candidate match is available in
the hash table entry. We proceed with the next best partial hypothesis. The search ends
when all test scene features have been matched or assigned to clutter.
4 Experimental results
4.1 Experimental setting
We tested our algorithm on two sets of images, containing respectively 49 and 161 model
images, and 101 and 51 test images (sets PM-gadgets-03 and JP-3Dobjects-04, available from http://www.vision.caltech.edu/html-files/archive.html). Each
model image contained a single object. Test images contained from zero (negative examples) to five objects, for a total of 178 objects in the first set, and 79 objects in the second
set. A large fraction of each test image consists of background. The images were taken
with no precautions relatively to lighting conditions or viewing angle.
The first set contains common kitchen items and objects of everyday use. The second
set (Ponce Lab, UIUC) includes office pictures. The objects were always moved between
model images and test images. The images of model objects used in the learning stage
were downsampled to fit in a 500 ? 500 pixels box, the test images were downsampled to
800 ? 800 pixels. With these settings, the number of features generated by the features
detector was of the order of 1000 per training image and 2000-4000 per test image.
Figure 5: Behavior induced by clutter detections. A ground truth model was created by cutting a
rectangle from the test image and adding noise. The recognition process is therefore expected to
find a perfect match. The two rows show the best and second best model found by each algorithm
(estimated frame position shown by the red box, features that found a match are shown in yellow).
4.2 Results
Our probabilistic method was compared against Lowe's voting approach on both sets of images. We implemented Lowe's algorithm following the details provided in [3, 4]. Direct comparison of our approach to "constellation" models [1, 2] is not possible as those require
many training samples for each class in order to learn shape parameters, while our method
learns from single examples. Recognition time for our unoptimized implementations was
10 seconds for Lowe?s algorithm and 25 seconds for our probabilistic method on a 2.8GHz
PC, both implementations used approximately 200MB of memory.
Both methods yielded similar detection rates for simple scenes. In challenging situations
with multiple objects or textured clutter, our method performs a more systematic check on
geometric consistency by updating likelihoods every time a match is added. Hypotheses
starting with wrong matches due to clutter don?t find further supporting matches, and are
easily discarded by a threshold based on the number of matches. Conversely, Lowe?s algorithm checks geometric consistency as a last step of the recognition process, but needs to
allow for a large slop in the transformation parameters. Spurious matches induced by clutter detections may still be accepted, thus leading to the acceptance of incorrect hypotheses.
An example of this behavior is displayed in Fig.5: the test image consists of a picture
of concrete. A rectangular patch was extracted from this image, noise was added to this
patch, and it was inserted in the database as a new model. With our algorithm, the best
hypothesis found the correct match with the patch of concrete, its best contender doesn?t
succeed in collecting more than one correspondence and is discarded. In Lowe?s case,
other models manage to accumulate a high number of correspondences induced by texture
matches among clutter detections. Although the first correspondence concerns the correct
model, it contains wrong matches. Moreover, the model displayed in the second row leads
to a false alarm supported by many matches.
Fig. 4 displays receiver operating characteristic (ROC) curves for both test sets, obtained for our probabilistic system and Lowe's method. Both curves confirm that our probabilistic interpretation leads to fewer false alarms than Lowe's method for the same detection rate.
5 Conclusion
We have proposed an object recognition method that combines the benefits of a set of rich
features with those of a probabilistic model of features positions and appearance. The use
of large number of features brings robustness with respect to occlusions and clutter. The
probabilistic model verifies the validity of candidate hypotheses in terms of appearance and
geometric configuration. Our system improves upon a state-of-the art recognition method
based on strict feature matching. In particular, the rate of false alarms in the presence of textured backgrounds generating strong erroneous matches is lower. This is a strong advantage in real-world situations, where a "clean" background is not always available.

Figure 6: Sample scenes and training objects from the two sets of images. Recognized frame poses are overlaid in red.
References
[1] M. Weber, M. Welling and P. Perona, "Unsupervised Learning of Models for Recognition", Proc. Europ. Conf. Comp. Vis., 2000.
[2] R. Fergus, P. Perona, A. Zisserman, "Object Class Recognition by Unsupervised Scale-invariant Learning", IEEE Conf. on Comp. Vis. and Patt. Recog., 2003.
[3] D.G. Lowe, "Object Recognition from Local Scale-invariant Features", ICCV, 1999.
[4] D.G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", Int. J. Comp. Vis., 60(2), pp. 91-110, 2004.
[5] G. Carneiro and A. Jepson, "Flexible Spatial Models for Grouping Local Image Features", IEEE Conf. on Comp. Vis. and Patt. Recog., 2004.
[6] I. Rigoutsos and R. Hummel, "A Bayesian Approach to Model Matching with Geometric Hashing", CVIU, 62(1), pp. 11-26, 1995.
[7] W.E.L. Grimson and D.P. Huttenlocher, "On the Sensitivity of Geometric Hashing", ICCV, 1990.
[8] H. Rowley, S. Baluja, T. Kanade, "Neural Network-based Face Detection", IEEE Trans. Patt. Anal. Mach. Int., 20(1), pp. 23-38, 1998.
[9] P. Viola and M. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features", Proc. IEEE Conf. Comp. Vis. Patt. Recog., 2001.
[10] L. Fei-Fei, R. Fergus, P. Perona, "Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories", CVPR, 2004.
[11] P. Moreels, M. Maire, P. Perona, "Recognition by Probabilistic Hypothesis Construction", Proc. 8th Europ. Conf. Comp. Vision, Prague, Czech Republic, pp. 55-68, 2004.
[12] T. Lindeberg, "Scale-space Theory: a Basic Tool for Analysing Structures at Different Scales", J. Appl. Stat., 21(2), pp. 225-270, 1994.
[13] A.R. Pope and D.G. Lowe, "Probabilistic Models of Appearance for 3-D Object Recognition", Int. J. Comp. Vis., 40(2), pp. 149-167, 2000.
[14] D. Geman and B. Jedynak, "An Active Testing Model for Tracking Roads in Satellite Images", IEEE Trans. Patt. Anal. Mach. Int., 18(1), pp. 1-14, 1996.
[15] C. Schmid, R. Mohr, C. Bauckhage, "Comparing and Evaluating Interest Points", Proc. of 6th Int. Conf. Comp. Vis., Bombay, India, 1998.
common:3 rotation:2 jp:1 discussed:1 interpretation:3 slight:1 accumulate:1 significant:2 ai:4 consistency:2 supervision:1 surface:1 v0:2 similarity:2 add:1 operating:1 closest:2 keyboard:1 discretizing:2 caltech:2 seen:1 minimum:1 employed:1 recognized:1 multiple:3 keypoints:2 exceeds:1 match:24 calculation:1 believed:1 basic:1 vision:4 circumstance:1 expectation:1 iteration:1 represent:1 background:6 pbg:13 diagram:2 archive:1 strict:1 induced:3 virtually:1 prague:1 call:1 slop:1 near:1 presence:3 enough:1 independence:6 fit:1 competing:2 restrict:1 idea:3 effort:1 proceed:1 aimed:1 amount:1 clutter:21 category:6 http:1 notice:1 estimated:1 per:2 patt:5 key:1 recomputed:1 threshold:3 clean:1 rectangle:1 imaging:1 pietro:1 fraction:3 angle:1 reasonable:1 missed:1 patch:3 decision:2 simplification:1 display:1 correspondence:3 yielded:1 fei:2 scene:8 encodes:2 speed:1 attempting:1 relatively:2 according:2 combination:2 cleverly:1 across:2 invariant:3 iccv:2 taken:1 end:3 available:4 gaussians:1 pierre:1 alternative:2 robustness:2 denotes:5 remaining:1 top:1 cf:1 calculating:1 build:1 implied:2 objective:1 question:12 added:3 strategy:4 separate:1 assuming:1 besides:1 modeled:1 index:1 ratio:4 difficult:1 unfortunately:2 relate:1 negative:1 design:1 implementation:2 anal:2 discretize:1 av:1 observation:6 snapshot:1 discarded:2 displayed:4 supporting:1 situation:3 extended:1 viola:1 frame:8 arbitrary:1 expressiveness:1 pair:1 required:1 specified:3 california:1 fv:6 learned:3 czech:1 trans:2 proceeds:2 memory:1 belief:15 moreels:2 natural:2 rely:1 technology:1 numerous:1 picture:2 created:3 fjk:15 schmid:1 vh:6 prior:1 literature:1 geometric:13 affine:1 viewpoint:2 share:2 heavy:1 row:4 compatible:3 supported:2 last:2 soon:1 guide:1 allow:2 india:1 fall:1 face:2 characterizing:2 ghz:1 benefit:1 curve:3 dimension:1 world:1 evaluating:1 rich:1 doesn:1 author:2 collection:3 made:4 refinement:1 simplified:1 far:2 welling:1 approximate:2 cutting:1 confirm:1 sequentially:2 active:1 receiver:1 assumed:1 xi:6 fergus:2 don:1 search:5 iterative:2 table:6 kanade:1 learn:1 complex:1 jepson:1 main:2 motivation:1 noise:5 alarm:4 verifies:1 allowed:1 complementary:1 gadget:1 fig:4 pope:1 roc:2 screen:1 position:9 explicit:1 xh:19 lie:1 candidate:7 learns:1 admissible:1 erroneous:1 showing:1 sift:1 constellation:4 list:1 explored:1 concern:1 grouping:1 false:4 adding:3 texture:2 illustrates:1 cviu:1 entropy:6 logarithmic:1 appearance:21 likely:1 explore:1 visual:1 contained:2 tracking:1 bauckhage:1 corresponds:1 truth:1 extracted:7 succeed:1 goal:1 shared:2 change:3 specifically:1 baluja:1 contrasted:1 total:1 ntest:2 accepted:3 experimental:2 revisitation:1 support:1 tested:2 |
1,923 | 2,747 | Responding to modalities with different latencies
Fredrik Bissmarck
Computational Neuroscience Labs
ATR International
Hikari-dai 2-2-2, Seika, Soraku
Kyoto 619-0288 JAPAN
[email protected]
Hiroyuki Nakahara
Laboratory for Mathematical Neuroscience
RIKEN Brain Science Institute
Hirosawa 2-1-1, Wako
Saitama 351-0198 JAPAN
[email protected]
Kenji Doya
Initial Research Project
Okinawa Institute of Science and Technology
12-22 Suzaki, Gushikawa
Okinawa 904-2234 JAPAN
[email protected]
Okihide Hikosaka
Laboratory of Sensorimotor Research
National Eye Institute, NIH
Building 49, Room 2A50
Bethesda, MD 20892
[email protected]
Abstract
Motor control depends on sensory feedback in multiple modalities with
different latencies. In this paper we consider, within the framework of reinforcement learning, how different sensory modalities can be combined and selected for real-time, optimal movement control. We propose an actor-critic architecture with multiple modules, whose outputs are combined using a softmax function. We tested our architecture in a simulation of a sequential reaching task. Reaching was initially guided by
visual feedback with a long latency. Our learning scheme allowed the
agent to utilize the somatosensory feedback with shorter latency when
the hand is near the experienced trajectory. In simulations with different
latencies for visual and somatosensory feedback, we found that the agent
depended more on feedback with shorter latency.
1 Introduction
For motor response, the brain relies on several modalities. These may carry different information. For example, vision keeps us updated on external world events, while somatosensation gives us detailed information about the state of the motor system. For most human
behaviour, both are crucial for optimal performance.
However, modalities may also differ in latency. For example, information may be perceived
faster by the somatosensory pathway than the visual. For quick responses it would be
reasonable that the modality with shorter latency is more important. The slower modality
would be useful if it carries additional information, for example when we have to attend to
a visual cue.
There has been a lot of research on modular organisation where each module is an expert of
a particular part of the state space (e.g. [1]). We address questions concerning modules with
different feedback delays, and how they are used for real-time motor control. How does the
latency affect the influence of a modality over action? How can modalities be combined?
Here, we propose an actor-critic framework, where modules compete for influence over
action by reinforcement. First, we present the generic framework and learning algorithm.
Then, we apply our model to a visuomotor sequence learning task, and give details of the
simulation results.
2 General framework
This section describes the generic concepts
of our model: a set of modules with delayed feedback, a function for combining
them and a learning algorithm.
2.1 Network architecture
Consider M modules, where each module has its own feedback signal y^m(x(t − τ^m)) (m = 1, 2, …, M) computed from the state of the environment x(t). Each module has a corresponding time delay τ^m (see figure 1). (The same feedback signals are used to compute the critic, see the next subsection.) Each module outputs a population-coded output a^m(t), where each element a_j^m (j = 1, 2, …, J) corresponds to the motor output vector u_j, which represents, for example, joint torques. The output of an actor is given by a function approximator a^m(t) = f(y^m(t − τ^m); w^m) with parameters w^m.
Figure 1: The general framework.
The actual motor command u ∈ R^D is given by a combination of the population vector outputs a^m of the modules. Here we consider the use of a softmax combination. The probability of taking the j-th motor output vector is given by

$$\pi_j(t) = \frac{\exp\left(\beta \sum_{m=1}^{M} a_j^m\right)}{\sum_{j'=1}^{J} \exp\left(\beta \sum_{m=1}^{M} a_{j'}^m\right)}$$

where β is the inverse temperature, controlling the stochasticity. At each moment, one of the motor command vectors is selected as p(u(t) = ū_j) = π_j(t). We define q(t) to be a binary vector of J elements where the one corresponding to the chosen action is 1 and the others are 0.
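As an illustration, a minimal Python sketch of this softmax combination and action sampling (our own code; the array shapes and names are assumptions, not taken from the paper):

import numpy as np

def select_action(module_outputs, beta=10.0, rng=np.random):
    """module_outputs: (M, J) array of population-coded outputs a^m_j."""
    logits = beta * module_outputs.sum(axis=0)   # sum over the M modules
    pi = np.exp(logits - logits.max())           # numerically stable softmax
    pi /= pi.sum()                               # pi_j(t)
    j = rng.choice(len(pi), p=pi)                # sample the action index
    q = np.zeros(len(pi)); q[j] = 1.0            # one-hot q(t)
    return j, q, pi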
There is no mechanism in the architecture that explicitly favours a module with
shorter latency. Instead, we test whether a reinforcement learning algorithm can learn to
select the modules which are more useful to the agent.
2.2 Learning algorithm
Our model is a form of the continuous actor-critic [2]. The function of the critic is to
estimate the expected future reward, i.e. to learn the value function
$$V = V\left(y^1(t - \tau^1),\, y^2(t - \tau^2),\, \ldots,\, y^M(t - \tau^M);\, w^c\right)$$

where w^c is a set of trainable parameters. The temporal difference (TD) error δ_TD is the discrepancy between expected and actual reward r(t). In its continuous form:

$$\delta_{TD}(t) = r(t) - \frac{1}{\tau_{TD}}\, V(t) + \dot{V}(t)$$

where τ_TD is the future reward discount time constant.
The TD error is used to update the parameters for both the critic, and the actor, which in
our framework is the set of modules.
Learning of each actor module is guided by the action deviation signal

$$E_j(t) = \frac{(q_j(t) - \pi_j(t))^2}{2}$$

which is the difference between its output and the action that was actually selected.
Parameters of the critic and actors are updated using eligibility traces

$$\dot{e}_k^c(t) = -\frac{1}{\kappa}\, e_k^c + \frac{\partial V}{\partial w_k^c}, \qquad \dot{e}_{kj}^m(t) = -\frac{1}{\kappa}\, e_{kj}^m + \frac{\partial E_j(t)}{\partial w_{kj}^m}$$

where k is the index of parameters and κ is a time constant. The trace for the m-th actor is given from

$$\frac{\partial E_j(t)}{\partial w_{kj}^m} = (q_j(t) - \pi_j(t))\, \frac{\partial \pi_j(t)}{\partial w_{kj}^m}$$

The parameters are updated by gradient descent as

$$\dot{w}_k^c = \eta\, \delta_{TD}(t)\, e_k^c(t), \qquad \dot{w}_{kj}^m = \eta\, \delta_{TD}(t)\, e_{kj}^m(t)$$

where η denotes the learning rate.
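For concreteness, the following Python sketch shows one way these update rules can be discretized with a simple Euler scheme. It is our own illustration, not the authors' implementation; the linear-critic gradient ∂V/∂w^c = y and the (q − π) ⊗ y actor trace are simplifying assumptions.

import numpy as np

def ac_step(y, r, V_prev, w_c, W_m, e_c, E_m, q, pi,
            dt=0.01, tau_td=0.2, kappa=0.2, eta=0.1):
    """One Euler step of the continuous actor-critic for a single module.
    y: feedback vector; w_c: critic weights; W_m: (J, K) actor weights;
    e_c / E_m: eligibility traces; q / pi: chosen action and softmax probs."""
    V = w_c @ y
    delta = r - V / tau_td + (V - V_prev) / dt        # continuous TD error
    e_c += dt * (-e_c / kappa + y)                    # dV/dw_c = y (linear critic)
    # (q - pi) outer y stands in for (q_j - pi_j) * dpi_j/dW_m, a common
    # approximation that absorbs the softmax derivative.
    E_m += dt * (-E_m / kappa + np.outer(q - pi, y))
    w_c += eta * delta * e_c
    W_m += eta * delta * E_m
    return V, delta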
2.3 Neuroanatomical correlates
Our network architecture is modeled to resemble the function of the basal ganglia-thalamocortical (BG-TC) system in selecting and learning actions for goal-directed movement. Actor-critic models of the basal ganglia have been proposed by many (e.g. [3], [4]). The modular organisation of the BG-TC loop circuits ([5], [6]), where modules depend on different sensory feedback, implies that the actor-critic relies on several modules.
3 An example application
To demonstrate our paradigm, we use as an example a motor sequence learning task inspired by "the n × n task", an experimental paradigm where monkeys and humans learn a sequence
of reaching movements, where error performance improved across days, and performance
time decreased across months [7]. The results from these experiments suggested that the
influence of the motor BG-TC loop for motor execution is relatively stronger for learned
sequences than for new ones, compared to the prefrontal BG-TC loop. In our model implementation, we want to investigate how the feedback delay affects the influence of visual
and sensorimotor modalities when learning a stereotyped real-time motor sequence. In our implementation (see figure 2), we use two modules, one "visual" and one "motor", corresponding to visual and somatosensory feedback respectively.
a preknown, visually guided reaching policy for arbitrary start and endpoints within reach.
This module does not learn. The motor module represents the motor skill memory to be
learned. It gives zero output initially, but learn by associating reinforcement with sequences
of actions. The controlled object is a 2DOF arm, for which the agent gives a joint torque
motor command, with action selection sampled at 100 Hz.
3.1 Environment
The environment consists of a 2DOF arm
(both links are 0.3 m long and 0.1 m in
diam., weight 1.0 kg), starting at position
S, directly controlled by the agent, and a
variable target (see environment box in figure 2). The task is to press three targets
in consecutive order, which always appear
at the same positions (one at one time),
marked 1, 2 and 3 in the figure. If the hand of the arm satisfies a proximity condition (|x^target − x^hand| < ε_prox and |ẋ^hand| < v_prox), a key (target) is considered pressed, and the next target appears immediately.
To allow a larger possibility of modifying the movement, we use a very loose velocity constraint v_prox (for all simulations, ε_prox = 0.02 m and v_prox = 0.5 m/s).
Each trial ended after successful completion of the task, or after 5 s.
For each successful key press, the agent is
rewarded instantaneously, with an increasing amount of reward for later keys in the
sequence (50, 100, 150 respectively). A
small, constant running time cost (10/s)
was subtracted from the reward function
r(t).
3.2
The visual module
The visual module is implemented as a computed torque feedback controller for simplicity. It was designed to give an output
as similar as possible to biological reaching movements, but we did not attempt to
design the controller itself in a biologically
plausible way.
Figure 2: Implementation of the example simulation. The visual module is fed back the hand position {x_1^hand, x_2^hand} and the position of the active target {x_1^target, x_2^target}, while the motor module is fed back a population code representing the joint angles {θ_1, θ_2}. S: Start, G: Goal. See text for further details.
The feedback signal y^v to the visual module consists of the hand kinematics x^hand, ẋ^hand and the target position x^target. Using a computed torque feedback control law, the visual module uses these signals to generate a reaching movement, representing the preknown motor behaviour of the agent. As such a control law has no measures to deal with delayed signals, we make the assumption that the control law relies on x̂^hand(t) = x^hand(t), i.e. the controller can predict across the delay of the arm movement (the target signal is still delayed by τ^v). This is a limitation of our example, but it is necessary to avoid "motor babbling", for which learning time would be infinitely long.
The controller output is

$$\dot{u}^{visual}(t) = -\frac{1}{\tau_{CT}}\, u^{visual}(t) + \alpha\, u^{visual0}\!\left(\dot{x}^{hand}, \ddot{x}^{hand}, e\right)$$

where τ_CT and α are constants, e = x^target(t − τ^v) − x̂^hand(t), and

$$u^{visual0}(t) = J^T \left( M \left( \ddot{x}^{hand} + K_1 \dot{x}^{hand} - K_2\, e \right) + C \dot{x}^{hand} \right)$$
where J is the Jacobian (∂x/∂θ^hand), M the moment of inertia matrix and C the Coriolis
matrix. With proper control gains K1 and K2 , the filter helps to give bell-shaped velocity
profiles for the reaching movement, desirable for its resemblance to biological motion.
The output u^visual is then expanded to a population vector

$$a_j^v(t) = \frac{1}{Z} \exp\left( -\frac{1}{2} \sum_d \left( \frac{u_d^{visual}(t) - \bar{u}_{jd}}{\sigma''_{jd}} \right)^2 \right)$$

where Z is the normalisation term, ū_{jd} is the preferred joint torque for Cartesian dimension d for vector element j, and σ''_{jd} the corresponding variance.
Parameters: $\tau_{CT}$ = 50 ms, $\kappa$ = 100, $K_1$ = [10 0; 0 10], $K_2$ = [50 0; 0 50]. The preferred joint torques $\bar{u}_j$ corresponding to action $j$ were distributed symmetrically over the origin in a 5×5 grid, in the range (−100:100, −100:100) with the middle (0,0) unit removed. The corresponding variances $\sigma''_{jd}$ were half the distance to the closest node in each direction.
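As an illustration, a minimal sketch of this population coding step (the grid construction and names are assumptions based on the parameters above):

```python
import numpy as np

def population_code(u, preferred, sigma):
    # a_j = (1/Z) exp(-0.5 * sum_d ((u_d - ubar_jd) / sigma_jd)^2)
    sq = ((u[None, :] - preferred) / sigma) ** 2   # shape (J, D)
    a = np.exp(-0.5 * sq.sum(axis=1))              # shape (J,)
    return a / a.sum()                             # divide by Z

# 5x5 torque grid over (-100:100, -100:100) with the (0, 0) unit removed
grid = np.array([(x, y)
                 for x in np.linspace(-100, 100, 5)
                 for y in np.linspace(-100, 100, 5)
                 if (x, y) != (0.0, 0.0)])
widths = np.full_like(grid, 25.0)   # half the 50-unit grid spacing
a_v = population_code(np.array([20.0, -60.0]), grid, widths)
```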
3.3 The motor module
The motor module relies on information about the motor state of the arm. In the vicinity of a target, it may be difficult to determine from the immediate motor state alone whether the hand should move towards or away from the target position. We solve this by adding contextual neurons. These neurons fire after a particular key is pressed.
Thus, the feedback signal $y^m$ with $k = 1, 2, .., K$ is partitioned by $K_0$: the first part ($k \le K_0$) represents the motor state, and the second part ($k > K_0$) represents the context. The feedback to the motor module are the joint angles and angular velocities $\theta, \dot{\theta}$ of the arm, expanded to a population vector with $K_0$ elements:
$$y^{m}_{k}(t) = \frac{1}{Z}\exp\Big(-\frac{1}{2}\Big\{\sum_{d}\Big(\frac{\theta_{d}(t) - \bar{\theta}_{kd}}{\sigma_{kd}}\Big)^{2} + \sum_{d}\Big(\frac{\dot{\theta}_{d}(t) - \bar{\dot{\theta}}_{kd}}{\sigma'_{kd}}\Big)^{2}\Big\}\Big)$$
where $\bar{\theta}_{kd}, \bar{\dot{\theta}}_{kd}$ are preferable joint angles and velocities, $\sigma_{kd}$ and $\sigma'_{kd}$ are corresponding variances, and $Z$ is a normalisation term.
The context units form $n = 1, 2, .., N$ tapped delay lines (where $N$ corresponds to the number of keys in the sequence), and each delay line has $Q$ units. For $k > K_0$, $k \ne K_0 + Q(n-1) + 1$:
$$\dot{y}^{m}_{k}(t) = -\frac{1}{\tau_C}\, y^{m}_{k}(t) + y^{m}_{k-1}(t)$$
Each delay line is initiated by the input at $k = K_0 + Q(n-1) + 1$:
$$y^{m}_{k}(t) = \delta(t - \tau^{keypress}_{n})$$
where $\delta$ is the Dirac delta function, and $\tau^{keypress}_{n}$ is the instant the $n$th key was pressed.
The response signal $a^m$ is the linear combination of $y^m$ and the trainable matrix $W^m$:
$$a^{m}(t) = W^{m} y^{m}(t - \tau^{m})$$
Though it is reasonable to use both feedback pathways for the critic, for simplicity we use only the motor one:
$$V(t) = W^{c} y^{m}(t - \tau^{m})$$
Parameters: The preferred joint angles $\bar{\theta}_{kd}$ and angular velocities $\bar{\dot{\theta}}_{kd}$ were distributed uniformly in a 7×7×3×3 grid ($K_0$ = 441 nodes) for $k = 1, 2, .., K_0$, in the ranges (−0.2:1.2, 1.2:1.6) rad and (−1:1, −1:1) rad/s. The corresponding variances $\sigma_{kd}$ and $\sigma'_{kd}$ were half the distance to the closest node in each direction. The contextual part of the vector has $Q = 8$, $N = 3$, which makes 24 elements. The time constant $\tau_C$ = 30 ms.
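A small sketch of one such tapped delay line, integrated with Euler steps (the discretized impulse and all names are illustrative assumptions):

```python
import numpy as np

def step_delay_line(y, impulse, dt, tau_C=0.030):
    # One Euler step of dy_k/dt = -(1/tau_C) y_k + y_{k-1}, where the
    # input to the first unit is a (discretized) Dirac pulse at a key press.
    prev = np.concatenate(([impulse / dt], y[:-1]))
    return y + dt * (-y / tau_C + prev)

y = np.zeros(8)                        # one delay line with Q = 8 units
for t in range(200):
    y = step_delay_line(y, impulse=1.0 if t == 0 else 0.0, dt=0.001)
```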
4 Simulation results
We trained the model for four different feedback delay pairs ($\tau^v/\tau^m$, in ms): 100/0, 100/50, 100/100, 0/100 ($\beta$ = 10, $\tau_{TD}$ = 200 ms, $\kappa$ = 200 ms, $\eta$ = 0.1 s⁻¹). We stopped the simulations after 125,000 trials. Two properties are essential for our argument: the shortest feedback delay $\tau_{min} = \min(\tau^v, \tau^m)$ and the relative latency $\Delta\tau = \tau^v - \tau^m$.
Figure 3: (Left) Change in performance time (running averages) across trials for different
feedback delays (displayed in ms as visual/motor). (Right) Example hand trajectories for
the initial (gray lines) and learned (black lines) behaviour for the run with 100 ms/0 ms
delay.
4.1 Final performance time depends on the shortest latency
Figure 3 shows that the performance time (PT, the time it takes to complete one trial) improved for all four simulations. The final PT relates to the shortest latency $\tau_{min}$: the shorter the latency, the better the final performance.
However, there are three possible reasons for the speedup: 1) a more deterministic (greedy) policy $\pi$, 2) a change in trajectory, and 3) faster reaction by utilization of faster feedback. As we observed more stereotyped trajectories and more deterministic policies after learning, reason 1) is true, but does it account for the entire improvement? For the rather exploratory, visually guided initial movement, the average PT is 1.55 s and 1.25 s for $\tau^v$ = 100 ms and $\tau^v$ = 0 ms respectively, while the corresponding greedy-policy PTs are 1.41 s and 1.13 s.
Since the final PTs were always lower, the speedup must also be due to other changes in behaviour. Figure 3 (right) shows example trajectories of the initial (gray) and learned (black) policy in 100/0. We see that while the initial movement was directed target-by-target, the learned policy displays a smoothly curved movement, optimized to perform the entire sequence. This is expected, as the discounted reward (determined by $\tau_{TD}$) and the time cost favour fast movements over slow ones. This change was to some degree observed in all four simulations, although it was most evident (see the next subsection) in the 100/0 case. So reason 2) also seems to be true. We also see that the shorter $\tau_{min}$, the shorter the final PT. Reason 3) is also significant: the possibility to speed up the movement is limited by $\tau_{min}$.
Figure 4: Performance after learning with typical examples of hand trajectories in a normal condition, and in a condition with the visual module turned off, for agents with different feedback delays. Average performance times are displayed for each. When the visual module was turned off, the agent often failed to complete the sequence within 5 s. Success rates are shown in parentheses, and the corresponding averages are for the successful trials only. The solid lines highlight the trajectory while execution is stable, while the dashed lines show the parts where the agent is out of control.
4.2 The module with shorter latency is more influential over motor control
Figure 4 shows the performance of sufficiently learned behaviour (after 125,000 trials) for two conditions: one normal ("condition 1") and one with the visual module turned off ("condition 2"). Condition 1 is shown mainly for reference. The differences in trajectories in condition 1 are marginal, but execution tends to destabilize with longer $\tau_{min}$. Condition 2 reveals the dependence on the visual module. In the 100/0 case, the correct spatial trajectory is generated each time, but a sometimes too fast movement leads to overshoots for the 2nd and 3rd keys. For smaller $\Delta\tau$ (rightwards in figure 4) the execution becomes unstable, and in the 0/100 case it could never execute the movement. For some reason, when the 100/100 case kept the hand on track, it was less likely to overshoot than the 100/50 case, which is why its average PT and success rate are better.
Thus, we conclude that the faster module is more influential over motor control. The adaptiveness of the motor loop also offers the motor module an advantage over the visual one.
5 Conclusion
Our framework offers a natural way to combine modules with different feedback latencies.
In any particular situation, the learning algorithm will reinforce the better module to use.
When execution is fast, the module with shorter latency may be favourable, and when slow,
the one with more information. For example, in the vicinity of the experienced sequence, our
agent utilized the somatosensory feedback to execute the movement more quickly, but once
it lost control the visual feedback was needed to put the arm back on track again.
By using the softmax function it is possible to flexibly gate or combine module outputs. Sometimes the asynchrony of modules can cause the visual and motor modules to be directed towards different targets. Then it is desirable to suppress the slower module in favour of the faster, which also occurred in our example when the motor module was reinforced enough to suppress the visual one. In other situations the reliability of one module may be insufficient for robust execution, making it necessary to combine modules.
In our 100/0 example, the slower visual module was used to assist the faster motor module in learning a skill. Once the skill was acquired, the visual module was no longer necessary for skillful execution, unless something went wrong. Thus, the visual module is freer to attend to other tasks. When we learn to ride a bicycle, for example, we first need to attend to what we do, but once we have learned, we can attend to other things, like the surrounding traffic or a conversation. Our result suggests that a longer relative latency helps to make the faster modality independent, so the slower one can be decoupled from execution after learning.
In the human brain, forward models are likely to have access to an efference copy of the
motor command, which may be more important than the incoming feedback for fast movements [1]. This is something we intend to look at in future work. Also, we will extend this
work with a more theoretical analysis, and compare the performance of multiple adaptive
modules.
Acknowledgements
The research is supported by CREST. The authors would like to thank Erhan Oztop and
Jun Morimoto for helpful comments.
References
[1] M. Haruno, D. M. Wolpert, and M. Kawato. Mosaic model for sensorimotor learning and control. Neural Comput, 13(10):2201–20, 2001.
[2] K. Doya. Reinforcement learning in continuous time and space. Neural Comput, 12(1):219–45, 2000.
[3] K. Doya. What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural Netw, 12(7-8):961–974, 1999.
[4] N. Daw. Reinforcement learning models of the dopamine system and their behavioral implications. PhD thesis, Carnegie Mellon University, 2003.
[5] G. E. Alexander and M. D. Crutcher. Functional architecture of basal ganglia circuits: neural substrates of parallel processing. Trends Neurosci, 13(7):266–71, 1990.
[6] H. Nakahara, K. Doya, and O. Hikosaka. Parallel cortico-basal ganglia mechanisms for acquisition and execution of visuomotor sequences - a computational approach. J Cogn Neurosci, 13(5):626–47, 2001.
[7] O. Hikosaka, H. Nakahara, M. K. Rand, K. Sakai, X. Lu, K. Nakamura, S. Miyachi, and K. Doya. Parallel neural networks for learning sequential procedures. Trends Neurosci, 22(10):464–71, 1999.
Distributed Occlusion Reasoning for Tracking with Nonparametric Belief Propagation
Erik B. Sudderth, Michael I. Mandel, William T. Freeman, and Alan S. Willsky
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
[email protected], [email protected], [email protected], [email protected]
Abstract
We describe a three-dimensional geometric hand model suitable for visual tracking applications. The kinematic constraints implied by the model's joints have a probabilistic structure which is well described by a graphical model. Inference in this model is complicated by the hand's many degrees of freedom, as well as multimodal likelihoods caused by ambiguous image measurements. We use nonparametric belief propagation (NBP) to develop a tracking algorithm which exploits the graph's structure to control complexity, while avoiding costly discretization.
While kinematic constraints naturally have a local structure, self-occlusions created by the imaging process lead to complex interdependencies in color and edge-based likelihood functions. However, we show that local structure may be recovered by introducing binary hidden variables describing the occlusion state of each pixel. We augment the NBP algorithm to infer these occlusion variables in a distributed fashion, and then analytically marginalize over them to produce hand position estimates which properly account for occlusion events. We provide simulations showing that NBP may be used to refine inaccurate model initializations, as well as track hand motion through extended image sequences.
1 Introduction
Accurate visual detection and tracking of three-dimensional articulated objects is a challenging problem with applications in human-computer interfaces, motion capture, and scene understanding [1]. In this paper, we develop a probabilistic method for tracking a geometric hand model from monocular image sequences. Because articulated hand models have many (roughly 26) degrees of freedom, exact representation of the posterior distribution over model configurations is intractable. Trackers based on extended and unscented Kalman filters [2, 3] have difficulties with the multimodal uncertainties produced by ambiguous image evidence. This has motivated many researchers to consider nonparametric representations, including particle filters [4, 5] and deterministic multiscale discretizations [6]. However, the hand's high dimensionality can cause these trackers to suffer catastrophic failures, requiring the use of models which limit the hand's motion [4] or sophisticated prior models of hand configurations and dynamics [5, 6].
Figure 1: Projected edges (left block) and silhouettes (right block) for a configuration of the 3D structural hand model matching the given image. To aid visualization, the model is also projected following rotations by 35° (center) and 70° (right) about the vertical axis.

An alternative way to address the high dimensionality of articulated tracking problems is to describe the posterior distribution's statistical structure using a graphical model. Graphical models have been used to track view-based human body representations [7], contour models of restricted hand configurations [8], view-based 2.5D "cardboard" models of hands and people [9], and a full 3D kinematic human body model [10]. Because the variables in these graphical models are continuous, and discretization is intractable for three-dimensional models, most traditional graphical inference algorithms are inapplicable. Instead, these trackers are based on recently proposed extensions of particle filters to general graphs: mean field Monte Carlo in [9], and nonparametric belief propagation (NBP) [11, 12] in [10].
In this paper, we show that NBP may be used to track a three-dimensional geometric model of the hand. To derive a graphical model for the tracking problem, we consider a redundant local representation in which each hand component is described by its own three-dimensional position and orientation. We show that the model's kinematic constraints, including self-intersection constraints not captured by joint angle representations, take a simple form in this local representation. We also provide a local decomposition of the likelihood function which properly handles occlusion in a distributed fashion, a significant improvement over our earlier tracking results [13]. We conclude with simulations demonstrating our algorithm's robustness to occlusions.
2 Geometric Hand Modeling
Structurally, the hand is composed of sixteen approximately rigid components: three phalanges or links for each finger and thumb, as well as the palm [1]. As proposed by [2, 3],
we model each rigid body by one or more truncated quadrics (ellipsoids, cones, and cylinders) of fixed size. These geometric primitives are well matched to the true geometry of
the hand, allow tracking from arbitrary orientations (in contrast to 2.5D "cardboard" models [5, 9]), and permit efficient computation of projected boundaries and silhouettes [3]. Figure 1 shows the edges and silhouettes corresponding to a sample hand model configuration. Note that only a coarse model of the hand's geometry is necessary for tracking.
2.1 Kinematic Representation and Constraints
The kinematic constraints between different hand model components are well described
by revolute joints [1]. Figure 2(a) shows a graph describing this kinematic structure, in
which nodes correspond to rigid bodies and edges to joints. The two joints connecting the
phalanges of each finger and thumb have a single rotational degree of freedom, while the
joints connecting the base of each finger to the palm have two degrees of freedom (corresponding to grasping and spreading motions). These twenty angles, combined with the
palm?s global position and orientation, provide 26 degrees of freedom. Forward kinematic
transformations may be used to determine the finger positions corresponding to a given set
of joint angles. While most model?based hand trackers use this joint angle parameterization, we instead explore a redundant representation in which the ith rigid body is described
by its position qi and orientation ri (a unit quaternion). Let xi = (qi , ri ) denote this local
description of each component, and x = {x1 , . . . , x16 } the overall hand configuration.
Clearly, there are dependencies among the elements of x implied by the kinematic con-
Figure 2: Graphs describing the hand model's constraints. (a) Kinematic constraints ($E_K$) derived from revolute joints. (b) Structural constraints ($E_S$) preventing 3D component intersections. (c) Dynamics relating two consecutive time steps. (d) Occlusion consistency constraints ($E_O$).
straints. Let $E_K$ be the set of all pairs of rigid bodies which are connected by joints, or equivalently the edges in the kinematic graph of Fig. 2(a). For each joint $(i, j) \in E_K$, define an indicator function $\psi^{K}_{i,j}(x_i, x_j)$ which is equal to one if the pair $(x_i, x_j)$ are valid rigid body configurations associated with some setting of the angles of joint $(i, j)$, and zero otherwise. Viewing the component configurations $x_i$ as random variables, the following prior explicitly enforces all constraints implied by the original joint angle representation:
$$p_K(x) \propto \prod_{(i,j)\in E_K} \psi^{K}_{i,j}(x_i, x_j) \qquad (1)$$
Equation (1) shows that $p_K(x)$ is an undirected graphical model, whose Markov structure is described by the graph representing the hand's kinematic structure (Fig. 2(a)).
2.2 Structural and Temporal Constraints
In reality, the hand's joint angles are coupled because different fingers can never occupy the same physical volume. This constraint is complex in a joint angle parameterization, but simple in our local representation: the position and orientation of every pair of rigid bodies must be such that their component quadric surfaces do not intersect.
We approximate this ideal constraint in two ways. First, we only explicitly constrain those pairs of rigid bodies which are most likely to intersect, corresponding to the edges $E_S$ of the graph in Fig. 2(b). Furthermore, because the relative orientations of each finger's quadrics are implicitly constrained by the kinematic prior $p_K(x)$, we may detect most intersections based on the distance between object centroids. The structural prior is then given by
$$p_S(x) \propto \prod_{(i,j)\in E_S} \psi^{S}_{i,j}(x_i, x_j), \qquad \psi^{S}_{i,j}(x_i, x_j) = \begin{cases} 1 & \|q_i - q_j\| > \delta_{i,j} \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$
where $\delta_{i,j}$ is determined from the quadrics composing rigid bodies $i$ and $j$. Empirically, we find that this constraint helps prevent different fingers from tracking the same image data.
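For reference, the structural check of eq. (2) amounts to a simple centroid-distance test; a minimal sketch ($\delta_{i,j}$ is assumed to be given):

```python
import numpy as np

def psi_S(q_i, q_j, delta_ij):
    # Structural indicator: components are compatible only when their
    # centroids are farther apart than the quadric-derived threshold.
    return 1.0 if np.linalg.norm(q_i - q_j) > delta_ij else 0.0
```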
In order to track hand motion, we must model the hand's dynamics. Let $x_i^t$ denote the position and orientation of the $i$th hand component at time $t$, and $x^t = \{x_1^t, \ldots, x_{16}^t\}$. For each component at time $t$, our dynamical model adds a Gaussian potential connecting it to the corresponding component at the previous time step (see Fig. 2(c)):
$$p_T(x^t \mid x^{t-1}) = \prod_{i=1}^{16} \mathcal{N}(x_i^t - x_i^{t-1};\, 0, \Lambda_i) \qquad (3)$$
Although this temporal model is factorized, the kinematic constraints at the following time step implicitly couple the corresponding random walks. These dynamics can be justified as the maximum entropy model given observations of the nodes' marginal variances $\Lambda_i$.
3 Observation Model
Skin colored pixels have predictable statistics, which we model using a histogram distribution $p_{skin}$ estimated from training patches [14]. Images without people were used to create a histogram model $p_{bkgd}$ of non-skin pixels. Let $\Omega(x)$ denote the silhouette of projected hand configuration $x$. Then, assuming pixels $u$ are independent, an image $y$ has likelihood
$$p_C(y \mid x) = \prod_{u\in\Omega(x)} p_{skin}(u) \prod_{v\in\Omega\setminus\Omega(x)} p_{bkgd}(v) \;\propto\; \prod_{u\in\Omega(x)} \frac{p_{skin}(u)}{p_{bkgd}(u)} \qquad (4)$$
The final expression neglects the proportionality constant $\prod_{v\in\Omega} p_{bkgd}(v)$, which is independent of $x$, and thereby limits computation to the silhouette region [8].
3.1 Distributed Occlusion Reasoning
In configurations where there is no self-occlusion, $p_C(y \mid x)$ decomposes as a product of local likelihood terms involving the projections $\Omega(x_i)$ of individual hand components [13]. To allow a similar decomposition (and hence distributed inference) when there is occlusion, we augment the configuration $x_i$ of each node with a set of binary hidden variables $z_i = \{z_{i(u)}\}_{u\in\Omega}$. Letting $z_{i(u)} = 0$ if pixel $u$ in the projection of rigid body $i$ is occluded by any other body, and 1 otherwise, the color likelihood (eq. (4)) may be rewritten as
$$p_C(y \mid x, z) = \prod_{i=1}^{16} \prod_{u\in\Omega(x_i)} \Big(\frac{p_{skin}(u)}{p_{bkgd}(u)}\Big)^{z_{i(u)}} = \prod_{i=1}^{16} p_C(y \mid x_i, z_i) \qquad (5)$$
Assuming they are set consistently with the hand configuration $x$, the hidden occlusion variables $z$ ensure that the likelihood of each pixel in $\Omega(x)$ is counted exactly once.
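To illustrate the decomposition of eq. (5), a minimal sketch of one component's log-likelihood, with the histogram lookups replaced by a precomputed per-pixel ratio (names are illustrative assumptions):

```python
import numpy as np

def log_color_likelihood(skin_ratio, z):
    # log p_C(y | x_i, z_i) = sum_u z_i(u) * log(p_skin(u) / p_bkgd(u)),
    # taken over pixels u in the projected silhouette of component i.
    # skin_ratio: per-pixel p_skin/p_bkgd values; z: binary visibility mask.
    return float(np.sum(z * np.log(skin_ratio)))
```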
We may enforce consistency of the occlusion variables using the following function:
$$\eta(x_j, z_{i(u)}; x_i) = \begin{cases} 0 & \text{if } x_j \text{ occludes } x_i,\ u \in \Omega(x_j), \text{ and } z_{i(u)} = 1 \\ 1 & \text{otherwise} \end{cases} \qquad (6)$$
Note that because our rigid bodies are convex and nonintersecting, they can never take mutually occluding configurations. The constraint $\eta(x_j, z_{i(u)}; x_i)$ is zero precisely when pixel $u$ in the projection of $x_i$ should be occluded by $x_j$, but $z_{i(u)}$ is in the unoccluded state. The following potential encodes all of the occlusion relationships between nodes $i$ and $j$:
$$\eta^{O}_{i,j}(x_i, z_i, x_j, z_j) = \prod_{u\in\Omega} \eta(x_j, z_{i(u)}; x_i)\, \eta(x_i, z_{j(u)}; x_j) \qquad (7)$$
These occlusion constraints exist between all pairs of nodes. As with the structural prior, we enforce only those pairs $E_O$ (see Fig. 2(d)) most prone to occlusion:
$$p_O(x, z) \propto \prod_{(i,j)\in E_O} \eta^{O}_{i,j}(x_i, z_i, x_j, z_j) \qquad (8)$$
Figure 3 shows a factor graph for the occlusion relationships between $x_i$ and its neighbors, as well as the observation potential $p_C(y \mid x_i, z_i)$. The occlusion potential $\eta(x_j, z_{i(u)}; x_i)$ has a very weak dependence on $x_i$, depending only on whether $x_i$ is behind $x_j$ relative to the camera.

Figure 3: Factor graph showing $p(y \mid x_i, z_i)$, and the occlusion constraints placed on $x_i$ by $x_j$, $x_k$. Dashed lines denote weak dependencies. The plate is replicated once per pixel.

3.2 Modeling Edge Filter Responses
Edges provide another important hand tracking cue. Using boundaries labeled in training images, we estimated a histogram $p_{on}$ of the response of a derivative of Gaussian filter steered to the edge's orientation [8, 10]. A similar histogram $p_{off}$ was estimated for filter outputs at randomly chosen locations. Let $\partial(x)$ denote the oriented edges in the projection of model configuration $x$. Then, again assuming pixel independence, image $y$ has edge likelihood
$$p_E(y \mid x, z) \propto \prod_{u\in\partial(x)} \frac{p_{on}(u)}{p_{off}(u)} = \prod_{i=1}^{16} \prod_{u\in\partial(x_i)} \Big(\frac{p_{on}(u)}{p_{off}(u)}\Big)^{z_{i(u)}} = \prod_{i=1}^{16} p_E(y \mid x_i, z_i) \qquad (9)$$
where we have used the same occlusion variables $z$ to allow a local decomposition.
4 Nonparametric Belief Propagation
Over the previous sections, we have shown that a redundant, local representation of the geometric hand model's configuration $x^t$ allows $p(x^t \mid y^t)$, the posterior distribution of the hand model at time $t$ given image observations $y^t$, to be written as
" 16
#
Y
? t t? X
t
t
t t
t
t t
t
t t
p x |y ?
pK (x )pS (x )pO (x , z )
pC (y | xi , zi )pE (y | xi , zi )
(10)
zt
i=1
The summation marginalizes over the hidden occlusion variables $z^t$, which were needed to locally decompose the edge and color likelihoods. When $T$ video frames are observed, the overall posterior distribution is given by
$$p(x \mid y) \propto \prod_{t=1}^{T} p(x^t \mid y^t)\, p_T(x^t \mid x^{t-1}) \qquad (11)$$
Excluding the potentials involving occlusion variables, which we discuss in detail in Sec. 4.2, eq. (11) is an example of a pairwise Markov random field:
$$p(x \mid y) \propto \prod_{(i,j)\in E} \psi_{i,j}(x_i, x_j) \prod_{i\in V} \psi_i(x_i, y) \qquad (12)$$
Hand tracking can thus be posed as inference in a graphical model, a problem we propose to solve using belief propagation (BP) [15]. At each BP iteration, some node $i \in V$ calculates a message $m_{ij}(x_j)$ to be sent to a neighbor $j \in \Gamma(i) \triangleq \{j \mid (i,j) \in E\}$:
$$m^{n}_{ij}(x_j) \propto \int_{x_i} \psi_{j,i}(x_j, x_i)\, \psi_i(x_i, y) \prod_{k\in\Gamma(i)\setminus j} m^{n-1}_{ki}(x_i)\, dx_i \qquad (13)$$
At any iteration, each node can produce an approximation $\hat{p}(x_i \mid y)$ to the marginal distribution $p(x_i \mid y)$ by combining the incoming messages with the local observation:
$$\hat{p}^{n}(x_i \mid y) \propto \psi_i(x_i, y) \prod_{j\in\Gamma(i)} m^{n}_{ji}(x_i) \qquad (14)$$
For tree-structured graphs, the beliefs $\hat{p}^n(x_i \mid y)$ will converge to the true marginals $p(x_i \mid y)$. On graphs with cycles, BP is approximate but often highly accurate [15].
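For intuition, eqs. (13)-(14) take the following form on a discretized state space; the tracker itself replaces these exact sums with NBP's sample-based approximations, and the data structures below are illustrative assumptions.

```python
import numpy as np

def bp_message(i, j, psi, phi, msgs, neighbors):
    # m_ij(x_j) ∝ sum_{x_i} psi_ji(x_j, x_i) phi_i(x_i) prod_k m_ki(x_i),
    # with the product over k in Gamma(i) \ j.
    prod = phi[i].copy()
    for k in neighbors[i]:
        if k != j:
            prod *= msgs[(k, i)]
    m = psi[(j, i)] @ prod           # marginalize over x_i
    return m / m.sum()

def belief(i, phi, msgs, neighbors):
    # p_hat(x_i | y) ∝ phi_i(x_i) prod_{j in Gamma(i)} m_ji(x_i)
    b = phi[i].copy()
    for j in neighbors[i]:
        b *= msgs[(j, i)]
    return b / b.sum()
```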
4.1 Nonparametric Representations
For the hand tracking problem, the rigid body configurations $x_i$ are six-dimensional continuous variables, making accurate discretization intractable. Instead, we employ nonparametric, particle-based approximations to these messages using the nonparametric belief propagation (NBP) algorithm [11, 12]. In NBP, each message is represented using either a sample-based density estimate (a mixture of Gaussians) or an analytic function. Both types of messages are needed for hand tracking, as we discuss below. Each NBP message update involves two stages: sampling from the estimated marginal, followed by Monte Carlo approximation of the outgoing message. For the general form of these updates, see [11]; the following sections focus on the details of the hand tracking implementation.
The hand tracking application is complicated by the fact that the orientation component $r_i$ of $x_i = (q_i, r_i)$ is an element of the rotation group SO(3). Following [10], we represent orientations as unit quaternions, and use a linearized approximation when constructing density estimates, projecting samples back to the unit sphere as necessary. This approximation is most appropriate for densities with tightly concentrated rotational components.
4.2 Marginal Computation
BP's estimate of the belief $\hat{p}(x_i \mid y)$ is equal to the product of the incoming messages from neighboring nodes with the local observation potential (see eq. (14)). NBP approximates this product using importance sampling, as detailed in [13] for cases where there is no self-occlusion. First, M samples are drawn from the product of the incoming kinematic and temporal messages, which are Gaussian mixtures. We use a recently proposed multiscale Gibbs sampler [16] to efficiently draw accurate (albeit approximate) samples, while avoiding the exponential cost associated with direct sampling (a product of d M-component Gaussian mixtures contains $M^d$ Gaussians). Following normalization of the rotational component, each sample is assigned a weight equal to the product of the color and edge likelihoods with any structural messages. Finally, the computationally efficient "rule of thumb" heuristic [17] is used to set the bandwidth of Gaussian kernels placed around each sample.
To derive BP updates for the occlusion masks $z_i$, we first cluster $(x_i, z_i)$ for each hand component so that $p(x^t, z^t \mid y^t)$ has a pairwise form (as in eq. (12)). In principle, NBP could manage occlusion constraints by sampling candidate occlusion masks $z_i$ along with rigid body configurations $x_i$. However, due to the exponentially large number of possible occlusion masks, we employ a more efficient analytic approximation.
Consider the BP message sent from $x_j$ to $(z_i, x_i)$, calculated by applying eq. (13) to the occlusion potential $\prod_u \eta(x_j, z_{i(u)}; x_i)$. We assume that $\hat{p}(x_j \mid y)$ is well separated from any candidate $x_i$, a situation typically ensured by the kinematic and structural constraints. The occlusion constraint's weak dependence on $x_i$ (see Fig. 3) then separates the message computation into two cases. If $x_i$ lies in front of typical $x_j$ configurations, the BP message $\phi_{j,i(u)}(z_{i(u)})$ is uninformative. If $x_i$ is occluded, the message approximately equals
$$\phi_{j,i(u)}(z_{i(u)} = 0) = 1 \qquad \phi_{j,i(u)}(z_{i(u)} = 1) = 1 - \Pr[u \in \Omega(x_j)] \qquad (15)$$
where we have neglected correlations among pixel occlusion states, and where the probability is computed with respect to $\hat{p}(x_j \mid y)$. By taking the product of these messages $\phi_{k,i(u)}(z_{i(u)})$ from all potential occluders $x_k$ and normalizing, we may determine an approximation to the marginal occlusion probability $\pi_{i(u)} \triangleq \Pr[z_{i(u)} = 0]$.
Because the color likelihood $p_C(y \mid x_i, z_i)$ factorizes across pixels $u$, the BP approximation to $p_C(y \mid x_i)$ may be written in terms of these marginal occlusion probabilities:
$$p_C(y \mid x_i) \approx \prod_{u\in\Omega(x_i)} \Big[ \pi_{i(u)} + (1 - \pi_{i(u)})\,\frac{p_{skin}(u)}{p_{bkgd}(u)} \Big] \qquad (16)$$
Intuitively, this equation downweights the color evidence at pixel $u$ as the probability of that pixel's occlusion increases. The edge likelihood $p_E(y \mid x_i)$ averages over $z_i$ similarly. The NBP estimate of $\hat{p}(x_i \mid y)$ is determined by sampling configurations of $x_i$ as before, and reweighting them using these occlusion-sensitive likelihood functions.
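As a rough sketch of this occlusion-sensitive reweighting, the per-pixel computation can be approximated as follows; here $\pi_{i(u)}$ is approximated by the probability that at least one occluder covers pixel $u$, a simplification of the normalized message product of eq. (15), and all names are illustrative.

```python
import numpy as np

def occlusion_prob(p_covers):
    # p_covers: (K, U) array of Pr[u in silhouette of occluder k] for
    # each of K potential occluders; returns pi_i(u) ~ Pr[z_i(u) = 0].
    return 1.0 - np.prod(1.0 - p_covers, axis=0)

def occlusion_aware_color(skin_ratio, pi):
    # Eq. (16): prod_u [ pi(u) + (1 - pi(u)) * p_skin(u)/p_bkgd(u) ]
    return float(np.prod(pi + (1.0 - pi) * skin_ratio))
```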
4.3 Message Propagation
To derive the propagation rule for non-occlusion edges, as suggested by [18] we rewrite the message update equation (13) in terms of the marginal distribution $\hat{p}(x_i \mid y)$:
$$m^{n}_{ij}(x_j) = \alpha \int_{x_i} \psi_{j,i}(x_j, x_i)\, \frac{\hat{p}^{n-1}(x_i \mid y)}{m^{n-1}_{ji}(x_i)}\, dx_i \qquad (17)$$
Our explicit use of the current marginal estimate $\hat{p}^{n-1}(x_i \mid y)$ helps focus the Monte Carlo
approximation on the most important regions of the state space. Note that messages sent along kinematic, structural, and temporal edges depend only on the belief $\hat{p}(x_i \mid y)$ following marginalization over the occlusion variables $z_i$.

Figure 4: Refinement of a coarse initialization following one and two NBP iterations, both without (left) and with (right) occlusion reasoning. Each plot shows the projection of the five most significant modes of the estimated marginal distributions. Note the difference in middle finger estimates.
Details and pseudocode for the message propagation step are provided in [13]. For kinematic constraints, we sample uniformly among permissible joint angles, and then use forward kinematics to propagate samples from $\hat{p}^{n-1}(x_i \mid y)/m^{n-1}_{ji}(x_i)$ to hypothesized configurations of $x_j$. Following [12], temporal messages are determined by adjusting the bandwidths of the current marginal estimate $\hat{p}(x_i \mid y)$ to match the temporal covariance $\Lambda_i$. Because structural potentials (eq. (2)) equal one for all state configurations outside some ball, the ideal structural messages are not finitely integrable. We therefore approximate the structural message $m_{ij}(x_j)$ as an analytic function equal to the weights of all kernels in $\hat{p}(x_i \mid y)$ outside a ball centered at $q_j$, the position of $x_j$.
5 Simulations
We now present a set of computational examples which investigate the performance of our distributed occlusion reasoning; see [13] for additional simulations. In Fig. 4, we use NBP to refine a coarse, user-supplied initialization into an accurate estimate of the hand's configuration in a single image. When occlusion constraints are neglected, the NBP estimates associate the ring and middle fingers with the same image pixels, and miss the true middle finger location. With proper occlusion reasoning, however, the correct hand configuration is identified. Using M = 200 particles, our Matlab implementation requires about one minute for each NBP iteration (an update of all messages in the graph).
Video sequences demonstrating the NBP hand tracker are available at http://ssg.mit.edu/nbp/. Selected frames from two of these sequences are shown in Fig. 5, in which filtered estimates are computed by a single "forward" sequence of temporal message updates. The initial frame was approximately initialized manually. The first sequence shows successful tracking through a partial occlusion of the ring finger by the middle finger, while the second shows a grasping motion in which the fingers occlude each other. For both of these sequences, rough tracking (not shown) is possible without occlusion reasoning, since all fingers are the same color and the background is unambiguous. However, we find that stability improves when occlusion reasoning is used to properly discount obscured edges and silhouettes.
6 Discussion
Sigal et al. [10] developed a three-dimensional NBP person tracker which models the conditional distribution of each linkage's location, given its neighbor, via a Gaussian mixture estimated from training data. In contrast, we have shown that an NBP tracker may be built around the local structure of the true kinematic constraints. Conceptually, this has the advantage of providing a clearly specified, globally consistent generative model whose properties can be analyzed. Practically, our formulation avoids the need to explicitly approximate kinematic constraints, and allows us to build a functional tracker without the need for precise, labelled training data.
Figure 5: Four frames from two different video sequences: a hand rotation containing finger occlusion (top), and a grasping motion (bottom). We show the projections of NBP's marginal estimates.
We have described the graphical structure underlying a kinematic model of the hand, and used this model to build a tracking algorithm using nonparametric BP. By appropriately augmenting the model's state, we are able to perform occlusion reasoning in a distributed fashion. The modular state representation and robust, local computations of NBP offer a solution particularly well suited to visual tracking of articulated objects.
Acknowledgments
The authors thank C. Mario Christoudias and Michael Siracusa for their help with video data collection, and Michael Black, Alexander Ihler, Michael Isard, and Leonid Sigal for helpful conversations.
This research was supported in part by DARPA Contract No. NBCHD030010.
References
[1] Y. Wu and T. S. Huang. Hand modeling, analysis, and recognition. IEEE Signal Proc. Mag., pages 51–60, May 2001.
[2] J. M. Rehg and T. Kanade. DigitEyes: Vision-based hand tracking for human-computer interaction. In Proc. IEEE Workshop on Non-Rigid and Articulated Objects, 1994.
[3] B. Stenger, P. R. S. Mendonca, and R. Cipolla. Model-based 3D tracking of an articulated hand. In CVPR, volume 2, pages 310–315, 2001.
[4] J. MacCormick and M. Isard. Partitioned sampling, articulated objects, and interface-quality hand tracking. In ECCV, volume 2, pages 3–19, 2000.
[5] Y. Wu, J. Y. Lin, and T. S. Huang. Capturing natural hand articulation. In ICCV, 2001.
[6] B. Stenger, A. Thayananthan, P. H. S. Torr, and R. Cipolla. Filtering using a tree-based estimator. In ICCV, pages 1063–1070, 2003.
[7] D. Ramanan and D. A. Forsyth. Finding and tracking people from the bottom up. In CVPR, volume 2, pages 467–474, 2003.
[8] J. M. Coughlan and S. J. Ferreira. Finding deformable shapes using loopy belief propagation. In ECCV, volume 3, pages 453–468, 2002.
[9] Y. Wu, G. Hua, and T. Yu. Tracking articulated body by dynamic Markov network. In ICCV, pages 1094–1101, 2003.
[10] L. Sigal, M. Isard, B. H. Sigelman, and M. J. Black. Attractive people: Assembling loose-limbed models using nonparametric belief propagation. In NIPS, 2003.
[11] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In CVPR, volume 1, pages 605–612, 2003.
[12] M. Isard. PAMPAS: Real-valued graphical models for computer vision. In CVPR, volume 1, pages 613–620, 2003.
[13] E. B. Sudderth, M. I. Mandel, W. T. Freeman, and A. S. Willsky. Visual hand tracking using nonparametric belief propagation. MIT LIDS TR2603, May 2004. Presented at CVPR Workshop on Generative Model Based Vision, June 2004. http://ssg.mit.edu/nbp/.
[14] M. J. Jones and J. M. Rehg. Statistical color models with application to skin detection. IJCV, 46(1):81–96, 2002.
[15] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. Technical Report 2004-040, MERL, May 2004.
[16] A. T. Ihler, E. B. Sudderth, W. T. Freeman, and A. S. Willsky. Efficient multiscale sampling from products of Gaussian mixtures. In NIPS, 2003.
[17] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall, 1986.
[18] D. Koller, U. Lerner, and D. Angelov. A general algorithm for approximate inference and its application to hybrid Bayes nets. In UAI 15, pages 324–333, 1999.
Brain Inspired Reinforcement Learning
François Rivest*
Yoshua Bengio
Département d'informatique et de recherche opérationnelle
Université de Montréal
CP 6128 succ. Centre-Ville, Montréal, QC H3C 3J7, Canada
[email protected]
[email protected]
John Kalaska
Département de physiologie
Université de Montréal
[email protected]
Abstract
Successful application of reinforcement learning algorithms often
involves considerable hand-crafting of the necessary non-linear
features to reduce the complexity of the value functions and hence
to promote convergence of the algorithm. In contrast, the human
brain readily and autonomously finds the complex features when
provided with sufficient training. Recent work in machine learning
and neurophysiology has demonstrated the role of the basal ganglia
and the frontal cortex in mammalian reinforcement learning. This
paper develops and explores new reinforcement learning
algorithms inspired by neurological evidence that provides
potential new approaches to the feature construction problem. The
algorithms are compared and evaluated on the Acrobot task.
1 Introduction
Reinforcement learning algorithms often face the problem of finding useful complex
non-linear features [1]. Reinforcement learning with non-linear function approximators like backpropagation networks attempts to address this problem, but in many cases has been demonstrated to be non-convergent [2]. The major
challenge faced by these algorithms is that they must learn a value function instead
of learning the policy, motivating an interest in algorithms directly modifying the
policy [3].
In parallel, recent work in neurophysiology shows that the basal ganglia can be
modeled by an actor-critic version of temporal difference (TD) learning [4][5][6], a
well-known reinforcement learning algorithm. However, the basal ganglia do not,
by themselves, solve the problem of finding complex features. But the frontal
cortex, which is known to play an important role in planning and decision-making,
is tightly linked with the basal ganglia. The nature or their interaction is still poorly
understood, and is generating a growing interest in neurophysiology.
*
URL: http://www.iro.umontreal.ca/~rivestfr
This paper presents new algorithms based on current neurophysiological evidence
about brain functional organization. It tries to devise biologically plausible
algorithms that may help overcome existing difficulties in machine reinforcement
learning. The algorithms are tested and compared on the Acrobot task. They are also
compared to TD using standard backpropagation as function approximator.
2 Biological Background
The mammalian brain has multiple learning subsystems. Major learning components
include the neocortex, the hippocampal formation (explicit memory storage system),
the cerebellum (adaptive control system) and the basal ganglia (reinforcement
learning, also known as instrumental conditioning).
The cortex can be argued to be equipotent, meaning that, given the same input, any
region can learn to perform the same computation. Nevertheless, the frontal lobe
differs by receiving a particularly prominent innervation of a specific type of
neurotransmitter, namely dopamine. The large frontal lobe in primates, and
especially in humans, distinguishes them from lower mammals. Other regions of the
cortex have been modeled using unsupervised learning methods such as ICA [7], but
models of learning in the frontal cortex are only beginning to emerge.
The frontal dopaminergic input arises in a part of the basal ganglia called ventral
tegmental area (VTA) and the substantia nigra (SN). The signal generated by
dopaminergic (DA) neurons resembles the effective reinforcement signal of
temporal difference (TD) learning algorithms [5][8]. Another important part of the
basal ganglia is the striatum. This structure is made of two parts, the matriosome
and the striosome. Both receive input from the cortex (mostly frontal) and from the
DA neurons, but the striosome projects principally to DA neurons in VTA and SN.
The striosome is hypothesized to act as a reward predictor, allowing the DA signal
to compute the difference between the expected and received reward. The
matriosome projects back to the frontal lobe (for example, to the motor cortex). Its
hypothesized role is therefore in action selection [4][5][6].
Although there have been several attempts to model the interactions between the
frontal cortex and basal ganglia, little work has been done on learning in the frontal
cortex. In [9], an adaptive learning system based on the cerebellum and the basal
ganglia is proposed. In [10], a reinforcement learning model of the hippocampus is
presented. In this paper, we do not attempt to model neurophysiological data per se,
but rather to develop, from current neurophysiological knowledge, new and efficient
biologically plausible reinforcement learning algorithms.
3 The Model
All models developed here follow the architecture depicted in Figure 1. The first
layer (I) is the input layer, where activation represents the current state. The second
layer, the hidden layer (H), is responsible for finding the non-linear features
necessary to solve the task. Learning in this layer will vary from model to model.
Both the input and the hidden layer feed the parallel actor-critic layers (A and V)
which are the computational analogs of the striatal matriosome and striosome,
respectively. They represent a linear actor-critic implementation of TD.
The neurological literature reports an uplink from V and the reward to DA neurons
which sends back the effective reinforcement signal e (dashed lines) to A, V and H.
The A action units usually feed into the motor cortex, which controls muscle
activation. Here, A?s are considered to represent the possible actions. The basal
ganglia receive input mainly from the frontal cortex and the dopaminergic signal
(e). They also receive some input from parietal cortex (which, as opposed to the
frontal cortex, does not receive DA input, and hence, may be unsupervised). H will
represent frontal cortex when given e and non-frontal cortex when not. The weights
W, v and U correspond to weights into the layers A, V and H respectively (e is not
weighted).
Figure 1: Architecture of the models. The sensory input layer I feeds the hidden layer H ((frontal) cortex) through the weights U; I and H feed the actor layer A (through W) and the critic layer V (through v) in the striatum; V and the reward drive the DA neurons, which send the effective reinforcement signal e back to A, V and H.
Let $x_t$ be the vector of the input layer activations based on the state of the environment at time $t$. Let $f$ be the sigmoidal activation function of hidden units in H. Then $y_t = [f(u_1 x_t), \ldots, f(u_n x_t)]^T$ is the vector of activations of the hidden layer at time $t$, where $u_i$ is a row of the weight matrix $U$. Let $z_t = [x_t^T\ y_t^T]^T$ be the state description formed by the layers I and H at time $t$.
3.1 Actor-critic
The actor-critic model of the basal ganglia developed here is derived from [4]. It is
very similar to the basal ganglia model in [5] which has been used to simulate
neurophysiological data recorded while monkeys were learning a task [6]. All units
are linear weighted sums of activity from the previous layers. The actor units
behave under a winner-take-all rule. The winner's activity settles to 1, and the others' to 0. The initial weights are all equal and non-negative in order to obtain an initially optimistic policy. Beginning with an overestimate of the expected reward leads every action to be negatively corrected, one after the other, until the best one remains. This usually favors exploration.
Then $V(z_t) = v^T z_t$. Let $b_t = W z_t$ be the vector of activations of the actor layer before the winner-take-all processing. Let $a_t = \arg\max_i(b_{t,i})$ be the winning action index at time $t$, and let the vector $c_t$ be the activation of layer A after the winner-take-all processing, such that $c_{t,a} = 1$ if $a = a_t$, 0 otherwise.
3.1.1 Formal description
TD learns a function $V$ of the state that should converge to the expected total discounted reward. In order to do so, it updates $V$ such that
$$V(z_{t-1}) \approx E[r_t + \gamma V(z_t)]$$
where $r_t$ is the reward at time $t$ and $\gamma$ the discount factor. A simple way to achieve that is to transform the problem into an optimization problem where the goal is to minimize:
$$E = [V(z_{t-1}) - r_t - \gamma V(z_t)]^2$$
It is also useful at this point to introduce the TD effective reinforcement signal, equivalent to the dopaminergic signal [5]:
$$e_t = r_t + \gamma V(z_t) - V(z_{t-1})$$
Thus $E = e_t^2$.
A learning rule for the weights $v$ of $V$ can then be devised by finding the gradient of $E$ with respect to the weights $v$. Here, $V$ is the weighted sum of the activity of I and H. Thus, the gradient is given by
$$\frac{\partial E}{\partial v} = 2 e_t [\gamma z_t - z_{t-1}]$$
Adding a learning rate and negating the gradient for minimization gives the update:
$$\Delta v = \alpha e_t [z_{t-1} - \gamma z_t]$$
Developing a learning rule for the actor units and their weights $W$ using a cost function is a bit more complex. One approach is to use the tri-hebbian rule
$$\Delta W = \alpha e_t c_{t-1} z_{t-1}^T$$
Remark that only the row vector of weights of the winning action is modified.
This rule was first introduced, but not simulated, in [4]. It associates the error e to
the last selected action. If the reward is higher than expected (e > 0), than the action
units activated by the previous state should be reinforced. Conversely, if it is less
than expected (e < 0), than the winning actor unit activity should be reduced for that
state. This is exactly what this tri-hebbian rule does.
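To make the two updates concrete, the following is a minimal sketch of one actor-critic step (our reconstruction, not the authors' code); the function names, array shapes and the shared learning rate η are assumptions of the sketch.

import numpy as np

def select_action(W, z):
    # Winner-take-all over the actor activations b_t = W z_t
    return int(np.argmax(W @ z))

def actor_critic_step(W, v, z_prev, z, a_prev, r, gamma=0.9, eta=0.1):
    """One TD update for the linear critic V(z) = v.z and the tri-hebbian actor.

    W      : (n_actions, n_state) actor weights
    v      : (n_state,) critic weights
    z_prev : state description at time t-1 (concatenated I and H activities)
    z      : state description at time t
    a_prev : index of the action that won the winner-take-all at t-1
    r      : reward received at time t
    """
    # Effective reinforcement (dopaminergic) signal: e_t = r_t + gamma*V(z_t) - V(z_{t-1})
    e = r + gamma * np.dot(v, z) - np.dot(v, z_prev)
    # Critic update: delta_v = eta * e_t * (z_{t-1} - gamma * z_t)
    v += eta * e * (z_prev - gamma * z)
    # Tri-hebbian actor update: only the winning action's row is modified
    W[a_prev] += eta * e * z_prev
    return e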
3.1.2 Biological justification
[4] presented the first description of an actor-critic architecture based on data from
the basal ganglia that resembles the one here. The major difference is that the V
update rule did not use the complete gradient information.
A similar version was also developed in [5], but with little mathematical
justification for the update rule. The model presented here is simpler and the critic
update rule is basically the same, but justified neurologically. Our model also has a
more realistic actor update rule consistent with neurological knowledge of plasticity
in the corticostriatal synapses [11] (H to V weights). The main purpose of the model
presented in [5] was to simulate dopaminergic activity for which V is the most
important factor, and in this respect, it was very successful [6].
3.2 Hidden Layer
Because the reinforcement learning layer is linear, the hidden layer must learn the
necessary non-linearity to solve the task. The rules below are attempts at
neurologically plausible learning rules for the cortex, assuming it has no clear
supervision signal other than the DA signal for the frontal cortex. All hidden-unit
weight vectors are initialized randomly and scaled to norm 1 after each update.
• Fixed random
This is the baseline model to which the other algorithms will be compared. The
hidden layer is composed of randomly generated hidden units that are not trained.
• ICA
In [7], the visual cortex was modeled by an ICA learning rule. If the non-frontal
cortex is equipotent, then any region of the cortex could be successfully modeled
using such a generic rule. The idea of combining unsupervised learning with
reinforcement learning has already proven useful [1], but the unsupervised features
were trained prior to the reinforcement training. On the other hand, [12] has shown
that different systems of this sort could learn concurrently. Here, the ICA rule from
[13] will be used as the hidden layer. This means that the hidden units are learning
to reproduce the independent source signals at the origin of the observed mixed
signal.
• Adaptive ICA (e-ICA)
If H represents the frontal cortex, then an interesting variation of ICA is to multiply
its update term by the DA signal e (see the sketch after this list). The size of e may
act as an adaptive learning rate whose source is the critic of the reinforcement
learning system. Also, if the reward is less than expected (e < 0), the features
learned by the ICA units may be more counterproductive than helpful, and e pushes the
learning away from those features.
• e-gradient method
Another possible approach is to base the update rule on the derivative of the
objective function E applied to the hidden layer weights U, while constraining the
update rule to only use information available locally. Let f' be the derivative of f;
then the gradient of E with respect to U is approximated by

∂E/∂u_i = 2 e_t [γ v_i f'(u_i x_t) x_t - v_i f'(u_i x_{t-1}) x_{t-1}].

Negating the gradient for minimization, adding a learning rate and removing the
non-local weight information gives the weight update rule:

Δu_i = η e_t [f'(u_i x_{t-1}) x_{t-1} - γ f'(u_i x_t) x_t].

Using the value of the weights v would lead to a rule that uses non-local information.
The cortex is unlikely to have this and might consider all the weights in v to be
equal to some constant.
To avoid neurons all moving in the same direction uniformly, we encourage the
units on the hidden layer to minimize their covariance. This can be achieved by
adding an inhibitory neuron. Let q_t be the average activity of the hidden units at
time t, i.e., the inhibitory neuron activity, and let q̄_t be the moving exponential
average of q_t. Since

Var[q_t] = (1/n^2) Σ_{i,j} cov(y_{t,i}, y_{t,j}) ≈ TimeAverage((q_t - q̄_t)^2),

and ignoring the non-linearity of f, the gradient of Var[q_t] with respect to the
weights U is approximated by

∂Var[q_t]/∂u_i = 2 (q_t - q̄_t) x_t.

Combined with the previous equation, this results in a new update rule:

Δu_i = η e_t [f'(u_i x_{t-1}) x_{t-1} - γ f'(u_i x_t) x_t] + β (q̄_t - q_t) x_t.
When allowing the discount factor to be different on the hidden layer, we found that
γ = 0 gave much better results (e-gradient(0)).
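The two trained hidden-layer rules above could be written as follows (a sketch, not the authors' code). For e-ICA we assume the square, complete-basis natural-gradient ICA step with a tanh score function, rather than the over/under-complete rule of [13]; the rate names eta and beta are also assumptions.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def e_ica_update(U, x, e, eta=0.1):
    """e-ICA: a natural-gradient ICA step scaled by the DA signal e (assumed form)."""
    y = U @ x
    phi = np.tanh(y)  # assumed score nonlinearity
    U += eta * e * (np.eye(U.shape[0]) - np.outer(phi, y)) @ U
    return U

def e_gradient0_update(U, x_prev, x_t, e, q_bar, eta=0.1, beta=0.01):
    """e-gradient(0): the local rule with gamma = 0 plus the covariance-reducing term."""
    h_prev = sigmoid(U @ x_prev)
    fprime = h_prev * (1.0 - h_prev)            # f'(u_i x_{t-1}) for each unit
    q_t = sigmoid(U @ x_t).mean()               # inhibitory neuron activity
    U += eta * e * np.outer(fprime, x_prev)     # e_t f'(u_i x_{t-1}) x_{t-1}
    U += beta * (q_bar - q_t) * np.tile(x_t, (U.shape[0], 1))  # covariance term
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # rescale rows to norm 1, as in the text
    return U, q_t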
4 Simulations & Results
All models of section 3 were run on the Acrobot task [8]. This task consists of a
two-link pendulum with torque on the middle joint. The goal is to bring the tip of
the second pole into a totally upright position.
4.1 The task: Acrobot
The input was coded using 12 equidistant radial basis functions for each angle and
13 equidistant radial basis functions for each angular velocity, for a total of 50
non-negative inputs. This somewhat simulates the input from joint-angle receptors. A
reward of 1 was given only when the final state was reached (in all other cases, the
reward of an action was 0). Only 3 actions were available (3 actor units): either -1, 0
or 1 unit of torque. The details can be found in [8].
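A sketch of this input coding (ours, under stated assumptions: Gaussian bumps, widths set by the center spacing, and the standard Acrobot velocity limits of 4π and 9π rad/s from [8]):

import numpy as np

def rbf_encode(value, lo, hi, n):
    """Encode a scalar with n equidistant radial basis functions on [lo, hi]."""
    centers = np.linspace(lo, hi, n)
    width = (hi - lo) / (n - 1)
    return np.exp(-0.5 * ((value - centers) / width) ** 2)

def encode_acrobot(theta1, theta2, dtheta1, dtheta2,
                   max_vel1=4 * np.pi, max_vel2=9 * np.pi):
    # 2 angles x 12 RBFs + 2 angular velocities x 13 RBFs = 50 non-negative inputs
    return np.concatenate([
        rbf_encode(theta1, -np.pi, np.pi, 12),
        rbf_encode(theta2, -np.pi, np.pi, 12),
        rbf_encode(dtheta1, -max_vel1, max_vel1, 13),
        rbf_encode(dtheta2, -max_vel2, max_vel2, 13),
    ])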
50 networks with different random initializations were run for all models for 100
episodes (an episode is the sequence of steps the network performs to achieve the
goal from the start position). Episodes were limited to 10000 steps. A number of
learning rate values were tried for each model (actor-critic layer learning rate and
hidden layer learning rate). The selected parameters were the ones for which the
average number of steps per episode plus its standard deviation was the lowest. All
hidden layer models received a learning rate of 0.1.
4.2 Results
Figure 2 displays the learning curves of every model evaluated. Three variables
were compared: overall learning performance (in number of steps to success per
episode), final performance (number of steps on the last episode), and early learning
performance (number of steps for the first episode).
Figure 2: Learning curves of the models (average number of steps per episode over the 100 episodes; Baseline, ICA, e-ICA, e-Gradient, e-Gradient(0)).
Figure 3: Average number of steps per episode with 95% confidence interval.
4.2.1 Space under the learning curve
Figure 3 shows the average steps per episode for each model in decreasing order.
All models needed fewer steps on average than baseline (which has no training at
the hidden layer). In order to assess the performance of the models, an ANOVA
analysis of the average number of steps per episode over the 100 episodes was
performed. Scheffé post-hoc analysis revealed that the performance of every model
was significantly different from every other, except for e-gradient and e-ICA (which
are not significantly different from each other).
4.2.2 Final performance
ANOVA analysis was also used to determine the final performance of the models,
by comparing the number of steps on the last episode. Scheffé test results showed
that all but e-ICA are significantly better than the baseline. Figure 4 shows the
results on the last episode in increasing order. The curved lines on top show the
homogeneous subsets.
Figure 4: Number of steps on the last episode with 95% confidence interval.
Figure 5: Number of steps on the first episode with 95% confidence interval.
4.2.3 Early learning
Figure 2 shows that the models also differed in their initial learning. To assess how
different those curves are, an ANOVA was run on the number of steps on the very
first episode. Under this measure, e-gradient(0) and e-ICA were significantly faster
than the baseline and ICA was significantly slower (Figure 5).
It makes sense for ICA to be slower at the beginning, since it first has to stabilize
for the RL system to be able to learn from its input. Until the ICA has stabilized, the
RL system has moving inputs, and hence cannot learn effectively. Interestingly,
e-ICA was protected against this effect, having a start-up significantly faster than
the baseline. This implies that the e signal could control the ICA learning to move
synergistically with the reinforcement learning system.
4.3 External comparison
Acrobot was also run using standard backpropagation with TD and an ε-greedy policy.
In this setup, a neural network of 50 inputs, 50 hidden sigmoidal units, and 1 linear
output was used as a function approximator for V. The network had cross-connections
and its weights were initialized as in section 3, such that both architectures closely
matched in terms of power. In this method, the RHS of the TD equation is used as a
constant target value for the LHS. A single gradient step was applied to minimize the
squared error after the result of each action. Although not different from the
baseline on the first episode, it was significantly worse in overall and final
performance, unable to improve consistently. This is a common problem when using
backprop networks in RL without handcrafting the necessary complex features. We
also tried SARSA (using one network per action), but results were worse than with TD.
The best result we found in the literature on the exact same task is from [8]. They
used SARSA(λ) with a linear combination of tiles. Tile coding discretizes the input
space into small hyper-cubes, and a few overlapping tilings were used. From available
reports, their first trial could be slower than e-gradient(0), but they could reach better
final performance after more than 100 episodes, with a final average of 75 steps
(after 500 episodes). On the other hand, their function had about 75000 weights
while all our models used 2900 weights.
5 Discussion
In this paper we explored a new family of biologically plausible reinforcement
learning algorithms inspired by models of the basal ganglia and the cortex. They use
a linear actor-critic model of the basal ganglia, extended with a variety of
unsupervised and partially supervised learning algorithms inspired by brain
structures.
learning and that a simple quasi-local rule at the hidden layer greatly improved
performance. Results also demonstrated the advantage of such a simple system over
the use of function approximators such as backpropagation. Empirical results
indicate a strong potential for some of the combinations presented here. It remains
to test them on further tasks, and to compare them to more reinforcement learning
algorithms. Possible loops from the actor units to the hidden layer are also to be
considered.
Acknowledgments
This research was supported by a New Emerging Team grant to John Kalaska and
Yoshua Bengio from the CIHR. We thank Doina Precup for helpful discussions.
References
[1] Foster, D. & Dayan, P. (2002) Structure in the space of value functions. Machine Learning
49(2):325-346.
[2] Tsitsiklis, J.N. & Van Roy, B. (1996) Featured-based methods for large scale dynamic
programming. Machine Learning 22:59-94.
[3] Sutton, R.S., McAllester, D., Singh, S. & Mansour, Y. (2000) Policy gradient methods for
reinforcement learning with function approximation. Advances in Neural Information Processing
Systems 12, pp. 1057-1063. MIT Press.
[4] Barto A.G. (1995) Adaptive critics and the basal ganglia. In Models of Information Processing in
the Basal Ganglia, pp.215-232. Cambridge, MA: MIT Press.
[5] Suri, R.E. & Schultz, W. (1999) A neural network model with dopamine-like reinforcement signal
that learns a spatial delayed response task. Neuroscience 91(3):871-890.
[6] Suri, R.E. & Schultz, W. (2001) Temporal difference model reproduces anticipatory neural activity.
Neural Computation 13:841-862.
[7] Doi, E., Inui, T., Lee, T.-W., Wachtler, T. & Sejnowski, T.J. (2003) Spatiochromatic receptive field
properties derived from information-theoretic analysis of cone mosaic responses to natural scenes.
Neural Computation 15:397-417.
[8] Sutton R.S. & Barto A.G. (1998) Reinforcement Learning: An Introduction. Cambridge, MA: MIT
Press.
[9] Doya K. (1999) What are the computations of the cerebellum, the basal ganglia and the cerebral
cortex? Neural Networks 12:961-974.
[10] Foster, D.J., Morris, R.G.M., & Dayan, P. (2000) A model of hippocampally dependent navigation,
using the temporal difference learning rule. Hippocampus 10:1-16.
[11] Wickens, J. & Kötter, R. (1995) Cellular models of reinforcement. In Models of Information
Processing in the Basal Ganglia, pp.187-214. Cambridge, MA: MIT Press.
[12] Whiteson, S. & Stone, P. (2003) Concurrent layered learning. In Proceedings of the 2nd
International Joint Conference on Autonomous Agents & Multi-agent Systems.
[13] Amari, S.-I. (1999) Natural gradient learning for over- and under-complete bases in ICA.
Neural Computation 11:1875-1883.
1,926 | 275 |
Generalization and Parameter Estimation
in Feedforward Nets:
Some Experiments
N. Morgan†
H. Bourlard†*
†International Computer Science Institute
Berkeley, CA 94704, USA
*Philips Research Laboratory Brussels
B-1170 Brussels, Belgium
ABSTRACT
We have done an empirical study of the relation of the number of
parameters (weights) in a feedforward net to generalization performance. Two experiments are reported. In one, we use simulated data
sets with well-controlled parameters, such as the signal-to-noise ratio
of continuous-valued data. In the second, we train the network on
vector-quantized mel cepstra from real speech samples. In each case,
we use back-propagation to train the feedforward net to discriminate in
a multiple class pattern classification problem. We report the results of
these studies, and show the application of cross-validation techniques
to prevent overfitting.
1 INTRODUCTION
It is well known that system models which have too many parameters (with respect
to the number of measurements) do not generalize well to new measurements. For
instance, an autoregressive (AR) model can be derived which will represent the training
data with no error by using as many parameters as there are data points. This would
generally be of no value, as it would only represent the training data. Criteria such as the
Akaike Information Criterion (AIC) [Akaike, 1974, 1986] can be used to penalize both
the complexity of AR models and their training error variance. In feedforward nets, we
do not currently have such a measure. In fact, given the aim of building systems which
are biologically plausible, there is a temptation to assume the usefulness of indefinitely
large adaptive networks. In contrast to our best guess at Nature's tricks, man-made systems for pattern recognition seem to require nasty amounts of data for training. In short,
the design of massively parallel systems is limited by the number of parameters that can
be learned with available training data. It is likely that the only way truly massive systems can be built is with the help of prior information, e.g., connection topology and
weights that need not be learned [Feldman et al, 1988].
Learning theory [Valiant, 1984; Pearl, 1978] has begun to establish what
is possible for trained systems. Order-of-magnitude lower bounds have been established
for the number of required measurements to train a desired size feedforward net
[Baum&Haussler, 1988]. Rules of thumb suggesting the number of samples required for
specific distributions could be useful for practical problems. Widrow has suggested having a training sample size that is 10 times the number of weights in a network ("Uncle
Bernie's Rule")[Widrow, 1987]. We have begun an empirical study of the relation of the
number of parameters in a feedforward net (e.g. hidden units, connections, feature
dimension) to generalization performance for data sets with known discrimination complexity and signal-to-noise ratio. In the experiment reported here, we are using simulated
data sets with controlled parameters, such as the number of clusters of continuous-valued
data. In a related practical example, we have trained a feedforward network on vectorquantized mel cepstra from real speech samples. In each case, we are using the backpropagation algorithm [Rumelhart et al, 1986] to train the feedforward net to discriminate
in a multiple class pattern classification problem. Our results confirm that estimating
more parameters than there are training samples can degrade generalization. However,
the peak in generalization performance (for the difficult pattern recognition problems
tested here) can be quite broad if the networks are not trained too long, suggesting that
previous guidelines for network size may have been conservative. Furthermore, cross-validation
techniques, which have also proved quite useful for autoregressive model
order determination, appear to improve generalization when used as a stopping criterion
for iteration, thus preventing overtraining.
2 RANDOM VECTOR PROBLEM
2.1 METHODS
Studies based on synthesized data sets will generally show behavior that is different from that seen with a real data set. Nonetheless, such studies are useful because of
the ease with which variables of interest may be altered. In this case, the object was to
manufacture a difficult pattern recognition problem with statistically regular variability
between the training and test sets. This is actually no easy trick; if the problem is too
easy, then even very small nets will be sufficient, and we would not be modeling the
problem of doing hard pattern classification with small amounts of training data. If the
problem is too hard, then variations in performance will be lost in the statistical variations inherent to methods like back-propagation, which use random initial weight values.
Random points in a 4-dimensional hyperrectangle (drawn from a uniform probability distribution) are classified arbitrarily into one of 16 classes. This group of points will
be referred to as a cluster. This process is repeated for 1-4 nonoverlapping hyperrectangles. A total of 64 points are chosen, 4 for each class. All points are then randomly perturbed with noise of uniform density and range specified by a desired signal-to-noise
ratio (SNR). The noise is added twice to create 2 data sets, one to be used for training
and the other for test. Intuitively, one might expect that 16-64 hidden units would be
required to transform the training space for classification by the output layer. However,
the variation between training and test and the relatively small amount of data (256
numbers) suggest that for large numbers of parameters (over 256) there should be a
significant degrading of generalization. Another issue was how performance in such a
situation would vary over large numbers of iterations.
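A sketch of the data generation as we read the description above; the offset used to keep the hyperrectangles non-overlapping, the seeding, and the uniform-noise convention (noise range = cluster range / SNR) are our assumptions.

import numpy as np

def make_random_vector_task(n_clusters=1, snr=1.0, dim=4, n_classes=16,
                            pts_per_class=4, seed=0):
    rng = np.random.default_rng(seed)
    n_pts = n_classes * pts_per_class                # 64 points in total
    labels = np.repeat(np.arange(n_classes), pts_per_class)
    cluster_of = rng.integers(0, n_clusters, n_pts)  # arbitrary cluster assignment
    # non-overlapping unit hyperrectangles, offset along the first axis
    base = rng.uniform(0.0, 1.0, (n_pts, dim))
    base[:, 0] += 2.0 * cluster_of
    noise_range = 1.0 / snr        # SNR = cluster range / noise range, cluster range = 1
    train = base + rng.uniform(-0.5, 0.5, base.shape) * noise_range
    test = base + rng.uniform(-0.5, 0.5, base.shape) * noise_range
    return train, test, labels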
Simulations were run on this data using multi-layer perceptrons (MLPs, i.e., layered
feedforward networks) with 4 continuous-valued inputs, 16 outputs, and a hidden layer of
sizes ranging from 4 to 128. Nets were run for signal-to-noise ratios of 1.0 and 2.0, where
the SNR is defined as the ratio of the range of the original cluster points to the range of
the added random values. Error back-propagation without momentum was used, with an
adaptation constant of .25. For each case, the 64 training patterns were used 10,000
times, and the resulting network was tested on the second data set every 100 iterations so
that generalization could be observed during the learning. Blocks of ten scores were
averaged to stabilize the generalization estimate. After this smoothing, the standard deviation of error (using the normal approximation to the binomial distribution) was roughly
1%. Therefore, differences of 3% in generalization performance are significant at a level
of .001. All computation was performed on Sun4-110's using code written in C at ICSI.
Roughly a trillion floating point operations were required for the study.
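A compact reconstruction of the training and scoring protocol (ours, not the original C code): per-pattern backpropagation without momentum, adaptation constant .25, a test-set score every 100 passes, and block-averaging of ten scores; the weight initialization scale is an assumption.

import numpy as np

def train_mlp(train_x, train_y, test_x, test_y, n_hidden, n_classes=16,
              lr=0.25, passes=10000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.1, (train_x.shape[1], n_hidden))
    W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
    onehot = np.eye(n_classes)[train_y]
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    test_scores = []
    for p in range(passes):
        for x, t in zip(train_x, onehot):        # per-pattern updates
            h = sig(x @ W1)
            o = sig(h @ W2)
            do = (o - t) * o * (1 - o)           # LMSE output delta
            dh = (do @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, do)
            W1 -= lr * np.outer(x, dh)
        if (p + 1) % 100 == 0:                   # score generalization periodically
            pred = sig(sig(test_x @ W1) @ W2).argmax(1)
            test_scores.append((pred == test_y).mean())
    # smooth the generalization estimate: average blocks of ten test scores
    blocks = np.array(test_scores).reshape(-1, 10).mean(axis=1)
    return W1, W2, blocks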
2.2 RESULTS
Table I shows the test performance for a single cluster and a signal-to-noise ratio
of 1.0. The chart shows the variation over a range of iterations and network size
(specified both as #hidden units and as the ratio of #weights to #measurements, or "weight
ratio"). Note that the percentages can have finer gradation than 1/64, due to the averaging, and that the performance on the training set is given in parentheses. Test performance is best for this case for 8 hidden units (24.7%), or a weight ratio of .62 (after 2000
iterations), and for 16 units (21.9%), or a weight ratio of 1.25 (after 10000 iterations). For
larger networks the performance degrades, presumably because of the added noise. At
2000 iterations, the degradation is statistically significant, even in going from 8 to 16 hidden units. There is further degradation out to the 128-unit case. The surprising thing is
that, while this degradation is quite noticeable, it is quite graceful considering the order-of-magnitude range in net sizes. An even stronger effect is the loss of generalization
power when the larger nets are more fully trained. All of the nets generalized better when
they were trained to a relatively poor degree, especially the larger ones.
Table I - Test (and training) scores: 1 cluster, SNR = 1.0

#hidden  #weights/   %Test (Train) Correct after N Iterations
units    #inputs     1000         2000         5000         10000
4        .31         9.2 (4.4)    21.7 (15.6)  12.0 (25.9)  15.6 (34.4)
8        .62         11.4 (5.2)   24.7 (17.0)  20.6 (29.8)  21.4 (63.9)
16       1.25        13.6 (6.9)   21.1 (18.4)  18.3 (37.2)  21.9 (73.4)
32       2.50        12.8 (6.4)   18.4 (18.3)  17.8 (41.7)  13.0 (80.8)
64       5.0         13.6 (7.7)   18.3 (20.8)  19.7 (34.4)  18.0 (79.2)
128      10.0        11.6 (6.7)   17.7 (19.1)  12.2 (34.7)  15.6 (75.6)
Table II shows the results for the same 1-cluster problem, but with higher SNR
data (2.0). In this case, a higher level of test performance was reached, and it was
reached for a larger net with more iterations (40.8% for 64 hidden units after 5000 iterations). At this point in the iterations, no real degradation was seen for up to 10 times the
number of weights as data samples. However, some signs of performance loss for the
largest nets were evident after 10000 iterations. Note that after 5000 iterations, the networks were only half-trained (roughly 50% error on the training set). When they were
80-90% trained, the larger nets lost considerable ground. For instance, the 10x net (128
hidden units) lost performance from 40.5% to 28.1% during these iterations. It appears
that the higher signal-to-noise of this example permitted performance gains for even
higher overparametrization factors, but that the result was even more sensitive to training
for too many iterations.
Table II - Test (and training) scores: 1 cluster, SNR = 2.0

#hidden  #weights/   %Test (Train) Correct after N Iterations
units    #inputs     1000         2000         5000         10000
4        .31         18.1 (8.4)   25.6 (29.1)  32.2 (29.8)  26.9 (29.2)
8        .62         22.5 (12.8)  31.1 (34.7)  34.5 (44.5)  33.3 (62.2)
16       1.25        22.0 (11.6)  33.4 (32.8)  33.6 (57.2)  29.4 (78.3)
32       2.50        25.6 (13.3)  33.4 (35.2)  39.4 (51.1)  34.2 (87.0)
64       5.0         26.4 (13.9)  36.1 (35.0)  40.8 (45.2)  33.6 (86.9)
128      10.0        26.9 (12.0)  34.5 (34.5)  40.5 (47.2)  28.1 (91.1)
Table III shows the performance for a 4-cluster case, with SNR = 1.0. Small nets are
omitted here, because earlier experiments showed this problem to be too hard. The best
performance (21.1%) is for one of the larger nets at 2000 iterations, so the degradation effect is not clearly visible for the undertrained case. At 10000 iterations, however,
the larger nets do poorly.
Table III - Test (and training) scores: 4 clusters, SNR = 1.0

#hidden  #weights/   %Test (Train) Correct after N Iterations
units    #inputs     1000         2000         5000         10000
32       2.50        13.8 (12.7)  18.3 (23.6)  15.8 (38.8)  9.4 (71.4)
64       5.0         13.6 (12.7)  18.4 (23.6)  14.7 (42.7)  18.8 (71.6)
96       7.5         15.3 (13.0)  21.1 (24.7)  15.9 (45.5)  16.3 (78.1)
128      10.0        15.2 (13.1)  19.1 (23.8)  17.5 (40.5)  10.5 (70.9)
Figure 1 illustrates this graphically. The "undertrained" case is relatively insensitive to the network size, as well as having the highest raw score.
3 SPEECH RECOGNITION
3.1 METHODS
In an ongoing project at ICSI and Philips, a German language data base consisting
of 100 training and 100 test sentences (both from the same speaker) was used for training of a multi-layer perceptron (MLP) for recognition of phones at the frame level, as
well as to estimate probabilities for use in the dynamic programming algorithm for a
discrete Hidden Markov Model (HMM) [Bourlard & Wellekens, 1988; Bourlard et al,
1989]. Vector-quantized mel cepstra were used as binary input to a hidden layer. Multiple frames were used as input to provide context to the network. While the size of the
output layer was kept fixed at 50 units, corresponding to the 50 phonemes to be recognized, the hidden layer was varied from 20 to 200 units, and the input context was kept
fixed at 9 frames of speech. As the acoustic vectors were coded on the basis of 132 prototype vectors by a simple binary vector with only one bit 'on', the input field contained
9x132=1188 units, and the total number of possible inputs was thus equal to 132^9. There
were 26767 training patterns and 26702 independent test patterns. Of course, this
represented only a very small fraction of the possible inputs, and generalization was thus
potentially difficult. Training was done by the classical "error back-propagation" algorithm, starting by minimizing an entropy criterion [Solla et al, 1988] and then the standard least-mean-square error (LMSE) criterion. In each iteration, the complete training
set was presented, and the parameters were updated after each training pattern.
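A sketch (ours) of this binary input construction: each frame is one of 132 prototype vectors, and 9 consecutive frames are concatenated into a 9 x 132 = 1188-dimensional one-hot field. The symmetric context window and the clamping at utterance edges are assumptions.

import numpy as np

def encode_context(vq_indices, t, context=9, codebook_size=132):
    """Build the binary MLP input for frame t from a sequence of VQ codebook indices."""
    half = context // 2
    x = np.zeros(context * codebook_size)
    for k in range(context):
        idx = int(np.clip(t - half + k, 0, len(vq_indices) - 1))  # clamp at edges (assumed)
        x[k * codebook_size + vq_indices[idx]] = 1.0
    return x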
To avoid overtraining of the MLP (as was later demonstrated by the random vector
experiment described above), improvement on the test set was checked after each iteration. If the classification rate on the test set was decreasing, the adaptation parameter of
the gradient procedure was decreased; otherwise it was kept constant. In another experiment this approach was systematized by splitting the data into three parts: one for training, one for test, and a third one absolutely independent of the training procedure for
validation. No significant difference was observed between classification rates for the
test and validation data.
Other than the obvious difference from the previous study (this one used real data), it is
important to note another significant point: in this case, we stopped iterating (by any one
particular criterion) when that criterion was leading to no new test set performance
improvement. While we had not yet done the simulations described above, we had
observed the necessity for such an approach over the course of our speech research. We
expected this to ameliorate the effects of overparameterization.
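A sketch of this stopping and annealing heuristic; the halving factor, the minimum step size, and the exact stopping test are our assumptions, since the text only specifies that the adaptation parameter is decreased when the held-out score drops.

def anneal_on_validation(train_one_pass, score, lr=0.25, max_passes=100,
                         lr_factor=0.5, min_lr=1e-4):
    """train_one_pass(lr): one presentation of the training set.
    score(): classification rate on the held-out (test or validation) data."""
    best = -float("inf")
    for _ in range(max_passes):
        train_one_pass(lr)
        s = score()
        if s < best:
            lr *= lr_factor          # held-out score fell: shrink the step size
            if lr < min_lr:
                break                # no further improvement: stop iterating
        else:
            best = s
    return best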
3.2 RESULTS
Table IV shows the variation in performance for 5, 20, 50, and 200 hidden units.
The peak at 20 hidden units for test set performance, in contrast to the continued
improvement in training set performance, can be clearly seen. However, the effect is certainly a mild one given the wide range in network size; using 10 times the number of
weights as in the "peak" case only causes a degradation of 3.1%. Note, however, that for
this experiment the more sophisticated training procedure was used, which halted training when generalization started to degrade.
For comparison with classical approaches, results obtained with Maximum Likelihood (ML) and Bayes estimates are also given. In those cases, it is not possible to use
contextual information, because the number of parameters to be learned would be
50 * 132^9 for the 9 frames of context. Therefore, the input field was restricted to a single
frame. The number of parameters for these two last classifiers was then 50 * 132 = 6600,
or a parameter/measurement ratio of .25. This restriction explains why the Bayes
classifier, which is inherently optimal for a given pattern classification problem, is shown
here as yielding a lower performance than the potentially suboptimal MLP.
Table IV - Test Run: Phoneme Recognition on German data base

hidden units   #parameters/#training numbers   training   test
5              .23                             62.8       54.2
20             .93                             75.7       62.7
50             2.31                            73.7       60.6
200            9.3                             86.7       59.6
ML             .25                             45.9       44.8
Bayes          .25                             53.8       53.0
4 CONCLUSIONS
While both studies show the expected effects of overparameterization (poor generalization, sensitivity to overtraining in the presence of noise), perhaps the most
significant result is that it was possible to greatly reduce the sensitivity to the choice of
network size by directly observing the network performance on an independent test set
during the course of learning (cross-validation). If iterations are not continued past this
point, fewer measurements are required. This only makes sense because of the interdependence of the learned parameters, particularly for the undertrained case. In any
event, though, it is clear that adding parameters over the number required for discrimination is wasteful of resources. Networks which require many more parameters than there
are measurements will certainly reach lower levels of peak performance than simpler
systems. For at least the examples described here, it is clear that both the size of the
MLP and the degree to which it should be trained are parameters which must be learned
from experimentation with the data set. Further study might, perhaps, yield enough
results to permit some rule of thumb dependent on properties of the data, but our current
thinking is that these parameters should be determined dynamically by testing on an
independent test set.
References
Akaike, H. (1974), "A new look at the statistical model identification," IEEE Trans.
Autom. Control, AC-19, 667-674
Akaike, H. (1986), "Use of Statistical Models for Time Series Analysis," Vol. 4, Proc.
IEEE Intl. Conference on Acoustics, Speech, and Signal Processing, Tokyo, 1986,
pp. 3147-3155
Baum, E.B., & Haussler, D., (1988), "What Size Net Gives Valid Generalization?",
Neural Computation. In Press
Bourlard, H., Morgan, N., & Wellekens, C.J., (1989), "Statistical Inference in Multilayer
Perceptrons and Hidden Markov Models, with Applications in Continuous Speech
Recognition," NATO Advanced Research Workshop, Les Arcs, France
Feldman, J.A., Fanty, M.A., and Goddard, N., (1988), "Computing with Structured Neural
Networks," Computer, Vol. 21, No. 3, pp. 91-104
Pearl, J., (1978), "On the Connection Between the Complexity and Credibility of Inferred
Models," Int. J. General Systems, Vol. 4, pp. 155-164
Rumelhart, D.E., Hinton, G.E., & Williams, R.J., (1986), "Learning internal representations by error propagation," in Parallel Distributed Processing (D.E. Rumelhart & J.L.
McClelland, Eds.), ch. 15, Cambridge, MA: MIT Press
Valiant, L.G., (1984), "A theory of the learnable," Comm. ACM, Vol. 27, No. 11, pp. 1134-1142
Widrow, B., (1987), "ADALINE and MADALINE," Plenary Speech, Vol. I, Proc. IEEE
1st Intl. Conf. on Neural Networks, San Diego, CA, 143-158
Figure 1: Sensitivity to net size (% correct on the test set vs. number of hidden units, 32-128, after 2,000 and after 10,000 iterations).
1,927 | 2,750 | Dynamical Synapses Give Rise to a Power-Law
Distribution of Neuronal Avalanches
Anna Levina3,4 , J. Michael Herrmann1,2 , Theo Geisel1,2,4
Bernstein Center for Computational Neuroscience Go? ttingen
Georg-August University G?ottingen, Institute for Nonlinear Dynamics
3
Graduate School Identification in Mathematical Models
4
Max Planck Institute for Dynamics and Self-Organization
Bunsenstr. 10, 37073 G?ottingen, Germany
anna|michael|[email protected]
1
2
Abstract
There is experimental evidence that cortical neurons show avalanche activity with the intensity of firing events being distributed as a power-law.
We present a biologically plausible extension of a neural network which
exhibits a power-law avalanche distribution for a wide range of connectivity parameters.
1 Introduction
Power-law distributions of event sizes have been observed in a number of seemingly diverse systems such as piles of granular matter [8], earthquakes [9], the game of life [1],
friction [7], and sound generated in the lung during breathing. Because it is unlikely that
the specific parameter values at which the critical behavior occurs are assumed by chance,
the question arises as to what mechanisms may tune the parameters towards the critical
state. Furthermore it is known that criticality brings about optimal computational capabilities [10], improves mixing or enhances the sensitivity to unpredictable stimuli [5]. Therefore, it is interesting to search for mechanisms that entail criticality in biological systems,
for example in the nervous tissue.
In [6] a simple model of a fully connected neural network of non-leaky integrate-and-fire
neurons was studied. This study not only presented the first example of a globally coupled system that shows criticality, but also predicted the critical exponent as well as some
extra-critical dynamical phenomena, which were later observed in experimental studies.
Recently, Beggs and Plenz [3] studied the propagation of spontaneous neuronal activity in
slices of rat cortex and neuronal cultures using multi-electrode arrays. Thereby, they found
avalanche-like activity where the avalanche sizes were distributed according to a power-law with an exponent of -3/2. This distribution was stable over a long period of time. The
authors suggested that such a distribution is optimal in terms of transmission and storage
of the information.
The network in [6] consisted of a set of N identical threshold elements characterized by
the membrane potential u ≥ 0 and was driven by a slowly delivered random input. When
the potential exceeds a threshold θ = 1, the neuron spikes and relaxes. All connections
in the network are described by a single parameter α representing the evoked synaptic
potential which a spiking neuron transmits to all postsynaptic neurons. The simplicity
of that model allows analytical consideration: an explicit formula for the probability
distribution of avalanche sizes depending on the parameter α was derived. A major
drawback of the model was the lack of any true self-organization. Only at an externally
well-tuned critical value α = α_cr did the distribution take the form of a power-law,
although with an exponent of precisely -3/2 (in the limit of a large system). The term
critical will be applied here also to finite systems: true criticality requires the
thermodynamic limit N → ∞, and we consider approximate power-law behavior characterized
by an exponent and an error that describes the remaining deviation from the best-matching
exponent.
In Fig. 1 (a-c) it is visible that the system may also exhibit other types of behavior,
such as small avalanches with a finite mean (even in the thermodynamic limit) at α < α_cr.
On the other hand, at α > α_cr the distribution becomes non-monotonous, which indicates
that avalanches of the size of the system occur frequently. Generally speaking, in order
to drive the system towards criticality it therefore suffices to decrease the large
avalanches and to enhance the small ones. Most interestingly, synaptic connections among
real neurons show a similar tendency, which thus deserves further study. We will consider
the standard model of short-term dynamics in synaptic efficacies [11, 13] and thereafter
discuss several numerically determined quantities. Our studies imply that dynamical
synapses indeed may support the criticalization of the neural activity in a small
homogeneous neural system.
2 The model
We are considering a network of integrate-and-fire neurons with dynamical synapses. Each
synapse is described by two parameters: the amount of available neurotransmitter and the
fraction of it which is ready to be used at the next synaptic event. Both parameters
change in time depending on the state of the presynaptic neuron. Such a system keeps a
long memory of the previous events and is known to exert a regulatory effect on the
network dynamics, which will turn out to be beneficial.
Our approach is based on the model of dynamical synapses which was shown by Tsodyks
and Markram to reliably reproduce the synaptic responses between pyramidal neurons
[11, 13]. Consider a set of N integrate-and-fire neurons characterized by a membrane
potential h_i ≥ 0, and two connectivity parameters for each synapse: J_{i,j} ≥ 0 and
u_{i,j} ∈ [0, 1]. The parameter J_{i,j} characterizes the number of available vesicles on
the presynaptic side of the connection from neuron j to neuron i. Each spike leads to the
usage of a portion of the resources of the presynaptic neuron; hence, at the next synaptic
event fewer transmitters will be available, i.e. activity will be depressed. Between
spikes, vesicles slowly recover on a timescale τ_1. The parameter u_{i,j} denotes the
actual fraction of vesicles on the presynaptic side of the connection from neuron j to
neuron i which will be used in the synaptic transmission. When a spike arrives at the
presynaptic side j, it causes an increase of u_{i,j}. Between spikes, u_{i,j} slowly
decreases to zero on a timescale τ_2. The combined effect of J_{i,j} and u_{i,j} results
in the facilitation or depression of the synapse. The dynamics of the membrane potential
h_i consists of the integration of excitatory postsynaptic currents over all synapses of
the neuron and the slowly delivered random input. When the membrane potential exceeds
threshold, the neuron emits a spike and h_i resets to a smaller value.
Figure 1: Probability distributions of avalanche sizes P(L, N, α): (a) the subcritical regime, α = 0.52; (b) the critical regime, α = 0.53; and (c) the supra-critical regime, α = 0.74. In (a-c) the solid lines and symbols denote the numerical results for the avalanche size distributions, and the dashed lines show the best matching power-law. Here the curves are temporal averages over 10^6 avalanches with N = 100, u_0 = 0.1, τ_1 = τ_2 = 0.1. Sub-figure (d) displays P(L, N, α) as a function of L for α varying from 0.34 to 0.98 with step 0.01; the presented curves are temporal averages over 10^6 avalanches with N = 200, u_0 = 0.1, τ_1 = τ_2 = 0.1.
The joint dynamics can be written as a system of differential equations:

J̇_{i,j} = (1/(τ_1 τ_s)) (J_0 - J_{i,j}) - u_{i,j} J_{i,j} δ(t - t_sp^j),   (1)

u̇_{i,j} = -(1/(τ_2 τ_s)) u_{i,j} + u_0 (1 - u_{i,j}) δ(t - t_sp^j),   (2)

ḣ_i = (1/τ_s) δ(r(t) - i) c ξ + Σ_{j=1}^{N} u_{i,j} J_{i,j} δ(t - t_sp^j).   (3)

Here δ(t) is the Dirac delta-function, t_sp^j is the spiking time of neuron j, J_0 is the
resting value of J_{i,j}, u_0 is the minimal value of u_{i,j}, and τ_s is a parameter
separating the time scales of the random input and the synaptic events. In the following
study we will use the discrete version of equations (1-3).
Figure 2: The best matching power-law exponent γ. The black line represents the present model, while the grey stands for model [6]. The average synaptic efficiency α varies from 0.3 to 1.0 with step 0.001. Presented curves are temporal averages over 10^7 avalanches with N = 200, u_0 = 0.1, τ_1 = τ_2 = 10. Note that for a network of 200 units the absolute critical exponent is smaller than the large-system limit γ = -1.5, and that the step size has been drastically reduced in the vicinity of the phase transition.
3 Discrete version of the model
We consider time measured in discrete steps, t = 0, 1, 2, .... Because synaptic values are
essentially determined presynaptically, we assume that all synapses of a neuron are
identical, i.e. J_j and u_j are used instead of J_{i,j} and u_{i,j}, respectively. The
system is initialized with arbitrary values h_i ∈ [0, 1), i = 1, ..., N, where the
threshold θ is fixed at 1. Depending on the state of the system at time t, the i-th
element receives external input I_i^ext(t) or internal input I_i^int(t) from the other
neural elements. The two effects result in an activation ĥ_i at time t + 1,

ĥ_i(t+1) = h_i(t) + I_i^ext(t) + I_i^int(t).   (4)

From the activation ĥ_i(t+1), the membrane potential of the i-th element at time t + 1 is
computed as

h_i(t+1) = ĥ_i(t+1)       if ĥ_i(t+1) < 1,
h_i(t+1) = ĥ_i(t+1) - 1   if ĥ_i(t+1) ≥ 1,   (5)

i.e. if the activation exceeds the threshold, it is reset but retains the supra-threshold
portion ĥ_i(t+1) - 1 of the membrane potential.
The external input I_i^ext(t) is a random amount c ξ received by a randomly chosen neuron.
Here, c is the input strength scale, a parameter of the model, and ξ is uniformly
distributed on [0, 1] and independent of i. The external input is considered to be
delivered slowly compared to the internal relaxation dynamics (which corresponds to
τ_s ≫ 1), i.e. it occurs only if no element has exceeded the threshold in the previous
time step. This corresponds to an infinite separation of the time scales of external
driving and avalanche dynamics discussed in the literature on self-organized criticality
[12, 14]. The present results, however, are not affected by a continuous external input
even during the avalanches.
Figure 3: The mean squared deviation from the best-fit power-law as a function of α. The grey code and parameters are the same as in Fig. 2. For the fit, avalanches of a size larger than 1 and smaller than N/2 have been used. Clearly, an error level above 0.1 indicates that the fitted curve is far from being a candidate for a power law. Near α = 1, when the non-dynamical model develops a supercritical behavior, the range of the power-law is quite limited. Interesting is again the sharp transition of the dynamical model, which is due to the facilitation strength surpassing a critical level.
The external input can formally be written as

I_i^ext(t) = c δ_{r,i}(t) δ_{|M(t-1)|,0} ξ,

where r is an integer random variable between 1 and N indicating the chosen element,
M(t) = {i | ĥ_i(t) ≥ 1} is the set of indices of supra-threshold elements at time step t,
and δ_{.,.} is the Kronecker delta. We will consider c = J_0; thus an external input is
comparable with a typical internal input.
The internal input I_i^int(t) is given by

I_i^int(t) = Σ_{j ∈ M(t-1)} J_j(t) u_j(t).
The system is initialized with u_i = u_0 and J_i = J_0, where J_0 = α/(N u_0) and α is the
connection strength parameter. As for the membrane potential dynamics, we can distinguish
two situations: either there were supra-threshold neurons at the previous moment of time
or not.
u_j(t+1) = u_j(t) - (1/τ_2)(u_j(t) - u_0) δ_{|M(t)|,0}   if ĥ_j(t) < 1,
u_j(t+1) = u_j(t) + (1 - u_j(t)) u_0                     if ĥ_j(t) ≥ 1,   (6)

J_j(t+1) = J_j(t) + (1/τ_1)(J_0 - J_j(t)) δ_{|M(t)|,0}   if ĥ_j(t) < 1,
J_j(t+1) = J_j(t)(1 - u_j(t))                            if ĥ_j(t) ≥ 1.   (7)
Thus, we have a model with parameters α, u_0, τ_1, τ_2 and N. Our main focus will be on
the influence of α on the cumulative dynamics of the network. The dependence on N has
been studied in [6], where it was found that the critical parameter of the distribution
scales as α_cr = 1 - N^{-1/2}. In the same way, the exponent will be smaller in modulus
than -3/2 for finite systems.
Figure 4: Average synaptic efficacy (left axis) for the parameter α varied from 0.53 to 0.55 with step 0.0005. The dashed line depicts the deviation from a power-law (right axis).
If at time t_0 an element receives an external input and fires, then an avalanche starts
and |M(t_0)| = 1. The system is globally coupled, such that during an avalanche all
elements receive internal input, including the unstable elements themselves. The avalanche
duration D > 0 is defined to be the smallest integer for which the stopping condition
|M(t_0 + D)| = 0 is satisfied. The avalanche size L is given by

L = Σ_{k=0}^{D-1} |M(t_0 + k)|.

The subject of our interest is the probability distribution of avalanche sizes P(L, N, α)
depending on the parameter α.
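A direct transcription (ours) of the discrete dynamics (4)-(7) for measuring avalanche sizes; applying exactly one recovery step per external-input event, the seeding, and the vectorized bookkeeping are assumptions standing in for the infinite separation of time scales.

import numpy as np

def simulate_avalanches(N=200, alpha=0.9, u0=0.1, tau1=10.0, tau2=10.0,
                        n_events=100000, seed=0):
    rng = np.random.default_rng(seed)
    J0 = alpha / (N * u0)
    h = rng.uniform(0.0, 1.0, N)      # membrane potentials in [0, 1)
    J = np.full(N, J0)                # available resources, eq. (7)
    u = np.full(N, u0)                # usable fraction, eq. (6)
    sizes = []
    for _ in range(n_events):
        # recovery while the network is quiet (the delta_{|M(t)|,0} terms)
        u += (u0 - u) / tau2
        J += (J0 - J) / tau1
        # slow external drive: one random neuron receives c*xi with c = J0
        h[rng.integers(N)] += J0 * rng.uniform()
        size = 0
        M = np.flatnonzero(h >= 1.0)  # supra-threshold set M(t)
        while M.size > 0:             # avalanche, eqs. (4)-(5)
            size += M.size
            h += np.sum(u[M] * J[M])  # internal input to every neuron
            h[M] -= 1.0               # reset, keeping the supra-threshold excess
            J[M] *= 1.0 - u[M]        # depression, eq. (7), spiking case
            u[M] += (1.0 - u[M]) * u0 # facilitation, eq. (6), spiking case
            M = np.flatnonzero(h >= 1.0)
        if size:
            sizes.append(size)
    return np.array(sizes)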
4 Results
Similarly to model [6], we considered the avalanche size distribution for different values
of α, cf. Fig. 1. Three qualitatively different regimes can be distinguished: subcritical,
critical, and supra-critical. For small values of α, subcritical avalanche-size
distributions are observed. The subcriticality is characterized by a negligible number of
avalanches of a size close to the system size. At α_cr, the system has an avalanche
distribution with an approximate power-law behavior for L, over a range from 1 almost up
to the size of the system, where an exponential cut-off is observed (Fig. 1b). Above the
critical value α_cr, avalanche size distributions become non-monotonous (Fig. 1c). Such
supra-critical curves have a minimum at an intermediate avalanche size.
There is a sharp transition from the subcritical to the critical regime and then a long
critical region where the distribution of avalanche sizes stays close to a power-law. For
a system of 200 neurons this transition is shown in Fig. 2. To characterize this effect we
used the least-squares estimate of the closest power-law parameters C_norm and γ:

p(L, N, α) ≈ C_norm L^γ.
The mean squared deviation from the estimated power-law undergoes a fast change Fig. 3
(bottom) near ?cr = 0.54. At this point the transition from the subcritical to the critical
regime occurs. Then there is a long interval of parameters for which the deviation from
the power-law is about 2%. Also, the parameters of the power-law approximately stay
constant. For different system-sizes different values of ?cr and ? are observed. At large
system sizes ? is close to ?1.5
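The least-squares fit itself is a straightforward linear regression on log-log axes. A minimal version, using the restriction 1 < L < N/2 mentioned in the caption of Fig. 3, might look as follows (the binning choice is ours):

import numpy as np

def fit_power_law(sizes, N):
    """Least-squares fit of p(L) ~ C_norm * L**gamma on log-log axes.
    Returns (C_norm, gamma, msd), where msd is the mean squared
    deviation from the fitted power law."""
    sizes = np.asarray(sizes, dtype=float)
    sizes = sizes[(sizes > 1) & (sizes < N / 2.0)]
    L_vals, counts = np.unique(sizes, return_counts=True)
    p = counts / counts.sum()                      # empirical p(L)
    gamma, logC = np.polyfit(np.log(L_vals), np.log(p), 1)
    resid = np.log(p) - (gamma * np.log(L_vals) + logC)
    msd = float(np.mean(resid ** 2))
    return float(np.exp(logC)), float(gamma), msd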
Figure 5: Difference between synaptic efficacy after and before an avalanche, averaged over
all synapses. Values larger than zero mean facilitation, smaller ones mean depression.
Presented curves are temporal averages over 10^6 avalanches with N = 100, u_0 = 0.1,
τ_1 = τ_2 = 10.

In order to develop a more extensive analysis, we also considered a number of additional
statistical quantities at the beginning and after the avalanche. The average synaptic efficacy
ε = ⟨ε_i⟩ = ⟨J_i u_i⟩ is determined by taking the average over all neurons participating
in an avalanche. This average shows the mean input which neurons receive at each step
of an avalanche. This characteristic quantity undergoes a sharp transition together with the
avalanche distribution, cf. Fig. 4. The meaning of the quantity ε in the present model
is similar to the coupling strength α/N in the model discussed in [6]. It is equal to the
average EPSP which all postsynaptic neurons will receive after a presynaptic neuron spikes.
The transition from a subcritical to a critical regime happens when ε jumps into the vicinity of α_cr/N of the previous model (for N = 100 and α_cr = 0.9). This points to the
correspondence between the two models.
When α is large, the synaptic efficacy is high and, hence, avalanches are large and the
intervals between them are small. Depression during the avalanche then dominates facilitation and decreases synaptic efficacy, and vice versa: when avalanches are small, facilitation
dominates depression. Thus, the synaptic dynamics stabilizes the network to remain near
the critical value for a large interval of parameters α. Fig. 4 shows the averaged effect
of an avalanche for different values of the parameter α. For α > α_cr, depression during the
avalanche is stronger than facilitation and avalanches on average decrease synaptic efficacy.
When α is very small, the effect of facilitation is washed out during the inter-avalanche period where synaptic parameters return to the resting state. To illustrate this, Fig. 5 shows
the difference, Δε = ⟨ε_after⟩ − ⟨ε_before⟩, between the average synaptic efficacies after and
before the avalanche depending on the parameter α. If this difference is larger than zero,
synapses are facilitated by the avalanche. If it is smaller than zero, synapses are depressed. For
small values of the parameter α avalanches lead to facilitation, while for large values of α
avalanches depress synapses.
In the limit N → ∞, the synaptic dynamics should be rescaled such that the maximum of
transmitter available at a time t divided by the average avalanche size converges to a value
which scales as 1 − N^(−1/2). In this way, if the average avalanche size is smaller than critical, synapses will essentially be enhanced; otherwise they will experience depression.
The parameters necessary for the model (such as the time scales) have been shown to be easily
achievable in the small (although time-consuming) simulations presented here.
5 Conclusion
We presented a simple biologically plausible complement to a model of a network of non-leaky
integrate-and-fire neurons which exhibits a power-law avalanche distribution for a
wide range of connectivity parameters. In previous studies [6] we showed that the simplest
model, with only one parameter α characterizing the synaptic efficacy of all synapses, exhibits
subcritical, critical and supra-critical regimes with a continuous transition from one to another, depending on the parameter α. These main classes are also present here, but the region
of critical behavior is immensely enlarged. Both models have a power-law distribution with
an exponent approximately equal to −3/2, although the exponent is somewhat smaller for
small network sizes. For network sizes close to those in the experiments described in [3]
the result is indistinguishable from the limiting value.
References
[1] P. Bak, K. Chen, and M. Creutz. Self-organized criticality in the 'Game of Life'.
Nature, 342:780–782, 1989.
[2] P. Bak, C. Tang, and K. Wiesenfeld. Self-organized criticality: an explanation of 1/f
noise. Phys. Rev. Lett., 59:381–384, 1987.
[3] J. Beggs and D. Plenz. Neuronal avalanches in neocortical circuits. J Neurosci,
23:11167–11177, 2003.
[4] J. Beggs and D. Plenz. Neuronal avalanches are diverse and precise activity
patterns that are stable for many hours in cortical slice cultures. J Neurosci,
24(22):5216–5229, 2004.
[5] R. Der, F. Hesse, and R. Liebscher. Contingent robot behavior from self-referential dynamical systems. Submitted to Autonomous Robots, 2005.
[6] C. W. Eurich, M. Herrmann, and U. Ernst. Finite-size effects of avalanche dynamics.
Phys. Rev. E, 66, 2002.
[7] H. J. S. Feder and J. Feder. Self-organized criticality in a stick-slip process. Phys.
Rev. Lett., 66:2669–2672, 1991.
[8] V. Frette, K. Christensen, A. M. Malthe-Sørenssen, J. Feder, T. Jøssang, and
P. Meakin. Avalanche dynamics in a pile of rice. Nature, 397:49, 1996.
[9] B. Gutenberg and C. F. Richter. Magnitude and energy of earthquakes. Ann. Geophys.,
9:1, 1956.
[10] R. A. Legenstein and W. Maass. Edge of chaos and prediction of computational power
for neural microcircuit models. Submitted, 2005.
[11] H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between pyramidal
neurons. Nature, 382:807–810, 1996.
[12] D. Sornette, A. Johansen, and I. Dornic. Mapping self-organized criticality onto criticality. J. Phys. I, 5:325–335, 1995.
[13] M. Tsodyks, K. Pawelzik, and H. Markram. Neural networks with dynamic synapses.
Neural Computation, 10:821–835, 1998.
[14] A. Vespignani and S. Zapperi. Order parameter and scaling fields in self-organized
criticality. Phys. Rev. Lett., 78:4793–4796, 1997.
1,928 | 2,751 | Learning Shared Latent Structure for Image
Synthesis and Robotic Imitation
Aaron P. Shon†
Keith Grochow†  Aaron Hertzmann‡  Rajesh P. N. Rao†
†Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195 USA
‡Department of Computer Science
University of Toronto
Toronto, ON M5S 3G4 Canada
{aaron,keithg,rao}@cs.washington.edu, [email protected]
Abstract
We propose an algorithm that uses Gaussian process regression to learn
common hidden structure shared between corresponding sets of heterogeneous observations. The observation spaces are linked via a single,
reduced-dimensionality latent variable space. We present results from
two datasets demonstrating the algorithm's ability to synthesize novel
data from learned correspondences. We first show that the method can
learn the nonlinear mapping between corresponding views of objects,
filling in missing data as needed to synthesize novel views. We then
show that the method can learn a mapping between human degrees of
freedom and robotic degrees of freedom for a humanoid robot, allowing
robotic imitation of human poses from motion capture data.
1
Introduction
Finding common structure between two or more concepts lies at the heart of analogical reasoning. Structural commonalities can often be used to interpolate novel data in one space
given observations in another space. For example, predicting a 3D object?s appearance
given corresponding poses of another, related object relies on learning a parameterization
common to both objects. Another domain where finding common structure is crucial is
imitation learning, also called ?learning by watching? [11, 12, 6]. In imitation learning,
one agent, such as a robot, learns to perform a task by observing another agent, for example, a human instructor. In this paper, we propose an efficient framework for discovering
parameterizations shared between multiple observation spaces using Gaussian processes.
Gaussian processes (GPs) are powerful models for classification and regression that subsume numerous classes of function approximators, such as single hidden-layer neural networks and RBF networks [8, 15, 9]. Recently, Lawrence proposed the Gaussian process
latent variable model (GPLVM) [4] as a new technique for nonlinear dimensionality reduction and data visualization [13, 10]. An extension of this model, the scaled GPLVM
(SGPLVM), has been used successfully for dimensionality reduction on human motion
capture data for motion synthesis and visualization [1].
In this paper, we propose a generalization of the GPLVM model that can handle multiple
observation spaces, where each set of observations is parameterized by a different set of
kernel parameters. Observations are linked via a single, reduced-dimensionality latent variable space. Our framework can be viewed as a nonlinear extension to canonical correlation
analysis (CCA), a framework for learning correspondences between sets of observations.
Our goal is to find correspondences on testing data, given a limited set of corresponding
training data from two observation spaces. Such an algorithm can be used in a variety of
applications, such as inferring a novel view of an object given a corresponding view of
a different object and estimating the kinematic parameters for a humanoid robot given a
human pose.
Several properties motivate our use of GPs. First, finding latent representations for correlated, high-dimensional sets of observations requires non-linear mappings, so linear CCA
is not viable. Second, GPs reduce the number of free parameters in the regression model,
such as number of basis units needed, relative to alternative regression models such as
neural networks. Third, the probabilistic nature of GPs facilitates learning from multiple
sources with potentially different variances. Fourth, probabilistic models provide an estimate of uncertainty in classification or interpolating between data; this is especially useful
in applications such as robotic imitation where estimates of uncertainty can be used to decide whether a robot should attempt a particular pose or not. GPs can also generate samples
of novel data, unlike many nonlinear dimensionality reduction methods [10, 13].
Fig. 1(a) shows the graphical model for learning shared structure using Gaussian processes.
A latent space X maps to two (or more) observation spaces Y, Z using nonlinear kernels,
and 'inverse' Gaussian processes map back from observations to latent coordinates. Synthesis employs a map from latent coordinates to observations, while recognition employs
an inverse mapping. We demonstrate our approach on two datasets. The first is an image
dataset containing corresponding views of two different objects. The challenge is to predict
corresponding views of the second object given novel views of the first based on a limited
training set of corresponding object views. The second dataset consists of human poses derived from motion capture data and corresponding kinematic poses from a humanoid robot.
The challenge is to estimate the kinematic parameters for robot pose, given a potentially
novel pose from human motion capture, thereby allowing robotic imitation of human poses.
Our results indicate that the model generalizes well when only limited training correspondences are available, and that the model remains robust when testing data is noisy.
2 Latent Structure Model
The goal of our model is to find a shared latent variable parameterization in a space X that
relates corresponding pairs of observations from two (or more) different spaces Y, Z. The
observation spaces might be very dissimilar, despite the observations sharing a common
structure or parameterization. For example, a robot?s joint space may have very different
degrees of freedom than a human?s joint space, although they may both be made to assume
similar poses. The latent variable space then characterizes the common pose space.
Let Y, Z be matrices of observations (training data) drawn from spaces of dimensionality
DY , DZ respectively. Each row represents one data point. These observations are drawn so
that the first observation y1 corresponds to the observation z1 , observation y2 corresponds
to observation z_2, etc., up to the number of observations N. Let X be a 'latent space' of
dimensionality D_X ≪ D_Y, D_Z. We initialize a matrix of latent points X by averaging the
top DX principal components of Y, Z. As with the original GPLVM, we optimize over a
limited subset of training points (the active set) to accelerate training, determined by the
informative vector machine (IVM) [5]. The SGPLVM assumes that a diagonal 'scaling
matrix' W scales the variances of each dimension k of the Y matrix (a similar matrix V
scales each dimension m of Z). The scaling matrix helps in domains where different output
dimensions (such as the degrees of freedom of a robot) can have vastly different variances.
We assume that each latent point x_i generates a pair of observations y_i, z_i via a nonlinear
function parameterized by a kernel matrix. GPs parameterize the functions f_Y : X → Y
and f_Z : X → Z. The SGPLVM model uses an exponential (RBF) kernel, defining the
similarity between two data points x, x′ as:

    k^Y(x, x′) = α_Y exp( −(γ_Y/2) ||x − x′||^2 ) + δ_{x,x′} β_Y^{−1}        (1)
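As a concrete illustration, Eq. (1) can be evaluated for whole sets of latent points at once. The sketch below is our own (in particular, the delta term is added on the diagonal when a point set is compared with itself), not the authors' code.

import numpy as np

def rbf_kernel(X1, X2, alpha, beta, gamma):
    """Eq. (1): k(x, x') = alpha * exp(-(gamma/2) * ||x - x'||^2) + delta_{x,x'} / beta.
    X1 is (n1, D_X) and X2 is (n2, D_X); returns the (n1, n2) kernel matrix."""
    sq = (np.sum(X1 ** 2, 1)[:, None] + np.sum(X2 ** 2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    K = alpha * np.exp(-0.5 * gamma * sq)
    if X1 is X2:                          # same point set: noise term on diagonal
        K = K + np.eye(len(X1)) / beta
    return K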
Here θ_Y = {α_Y, β_Y, γ_Y} denotes the hyperparameters for the Y space, and δ the delta function. Following standard notation for GPs [8, 15, 9], the priors P(θ_Y), P(θ_Z), P(X),
the likelihoods P(Y), P(Z) for the Y, Z observation spaces, and the joint likelihood
P_GP(X, Y, Z, θ_Y, θ_Z) are given by:

    P(Y | θ_Y, X) = |W|^N / sqrt( (2π)^{N D_Y} |K_Y|^{D_Y} ) · exp( −(1/2) Σ_{k=1}^{D_Y} w_k^2 Y_k^T K_Y^{−1} Y_k )        (2)

    P(Z | θ_Z, X) = |V|^N / sqrt( (2π)^{N D_Z} |K_Z|^{D_Z} ) · exp( −(1/2) Σ_{m=1}^{D_Z} v_m^2 Z_m^T K_Z^{−1} Z_m )        (3)

    P(θ_Y) ∝ 1/(α_Y β_Y γ_Y),    P(θ_Z) ∝ 1/(α_Z β_Z γ_Z)        (4)

    P(X) ∝ exp( −(1/2) Σ_i ||x_i||^2 )        (5)

    P_GP(X, Y, Z, θ_Y, θ_Z) = P(Y | θ_Y, X) P(Z | θ_Z, X) P(θ_Y) P(θ_Z) P(X)        (6)
where α_Z, β_Z, γ_Z are hyperparameters for the Z space, and w_k, v_m respectively denote the
diagonal entries of the matrices W, V. Let Y, K_Y respectively denote the Y observations
from the active set (with the mean μ_Y subtracted out) and the kernel matrix for the active set.
The joint negative log likelihood of a latent point x and observations y, z is:
    L_{y|x}(x, y) = ||W (y − f_Y(x))||^2 / (2 σ_Y^2(x)) + (D_Y/2) ln σ_Y^2(x)        (7)

    f_Y(x) = μ_Y + Y^T K_Y^{−1} k(x)        (8)

    σ_Y^2(x) = k(x, x) − k(x)^T K_Y^{−1} k(x)        (9)

    L_{z|x}(x, z) = ||V (z − f_Z(x))||^2 / (2 σ_Z^2(x)) + (D_Z/2) ln σ_Z^2(x)        (10)

    f_Z(x) = μ_Z + Z^T K_Z^{−1} k(x)        (11)

    σ_Z^2(x) = k(x, x) − k(x)^T K_Z^{−1} k(x)        (12)

    L_{x,y,z} = L_{y|x} + L_{z|x} + (1/2) ||x||^2        (13)
The model learns a separate kernel for each observation space, but a single set of common
latent points. A conjugate gradient solver adjusts model parameters and latent coordinates
to maximize Eq. 6.
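For reference, the GP prediction of Eqs. (8)-(9) can be sketched as below, reusing the rbf_kernel helper above. For clarity it uses an explicit matrix inverse; a practical implementation would precompute a Cholesky factorization of the active-set kernel. Names and signatures are our own.

def gp_predict(x_star, X_active, Y_active, mu_Y, alpha, beta, gamma):
    """Posterior mean f_Y(x) (Eq. 8) and variance sigma_Y^2(x) (Eq. 9)
    at a latent point x_star, given the active set (X_active, Y_active)."""
    K = rbf_kernel(X_active, X_active, alpha, beta, gamma)
    K_inv = np.linalg.inv(K)
    k_x = rbf_kernel(x_star[None, :], X_active, alpha, beta, gamma)[0]
    f = mu_Y + (Y_active - mu_Y).T @ (K_inv @ k_x)    # Eq. (8)
    k_xx = alpha + 1.0 / beta                         # k(x, x) from Eq. (1)
    var = k_xx - k_x @ K_inv @ k_x                    # Eq. (9)
    return f, var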
Given a trained SGPLVM, we would like to infer the parameters in one observation space
given parameters in the other (e.g., infer robot pose z given human pose y). We solve this
problem in two steps. First, we determine the most likely latent coordinate x given the
observation y using argmax_x L_X(x, y). In principle, one could find x where ∂L_X/∂x = 0 using
gradient descent. However, to speed up recognition, we instead learn a separate 'inverse'
Gaussian process f_Y^{−1} : y → x that maps back from the space Y to the space X. Once
the correct latent coordinate x has been inferred for a given y, the model uses the trained
SGPLVM to predict the corresponding observation z.
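The resulting two-step inference is then a composition of two regressions. In the sketch below the 'inverse' model is implemented as one more GP regression with Y as inputs and X as targets, which is one plausible reading of the text; theta_inv and theta_Z stand for the (alpha, beta, gamma) triples of the two kernels.

def infer_correspondence(y_star, X_active, Y_active, Z_active,
                         mu_X, mu_Z, theta_inv, theta_Z):
    """Map an observation y to a latent point with f_Y^{-1}, then
    synthesize the corresponding z (and its variance) with f_Z."""
    x_star, _ = gp_predict(y_star, Y_active, X_active, mu_X, *theta_inv)
    z_star, var_z = gp_predict(x_star, X_active, Z_active, mu_Z, *theta_Z)
    return x_star, z_star, var_z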
3 Results
We first demonstrate how our model can be used to synthesize new views of an object,
character or scene from known views of another object, character or scene, given a
common latent variable model. For ease of visualization, we used 2D latent spaces for all
results shown here. The model was applied to image pairs depicting corresponding views
of 3D objects¹ rotated at varying degrees out of the camera plane. We downsampled the
images to 32 × 32 grayscale pixels. For fitting images, the scaling matrices W, V are of
minimal importance (since we expect all pixels should a priori have the same variance).
We also found empirically that using f_Y(x) = Y^T K_Y^{−1} k(x)
instead of Eqn. 8 produced better renderings. We rescaled each f_Y to use the full range of
pixel values [0 . . . 255], creating the images shown in the figures.
Fig. 1(b) shows how the model extrapolates to novel datasets given a limited set of training correspondences. We trained the model using 72 corresponding views of two different
objects, a coffee cup and a toy truck. Fixing the latent coordinates learned during training,
we then selected 8 views of a third object (a toy car). We selected latent points corresponding to those views, and learned kernel parameters for the 8 images. Empirically, priors on
kernel parameters are critical for acceptable performance, particularly when only limited
data are available, such as the 8 different poses for the toy car. In this case, we used the
kernel parameters learned for the cup and toy truck (based on 72 different poses) to impose
a Gaussian prior on the kernel parameters for the car (replacing P(θ) in Eqn. 4):

    −log P(θ_car) = −log P_GP + (θ_car − μ_θ)^T Σ_θ^{−1} (θ_car − μ_θ)        (14)

where θ_car, μ_θ, Σ_θ^{−1} are respectively the kernel parameters for the car, the mean of the
kernel parameters of the previously learned kernels (for the cup and truck), and the inverse
covariance matrix of the learned kernel parameters. μ_θ, Σ_θ^{−1} in this case are derived from
only two samples, but nonetheless successfully constrain the kernel parameters for the car
so that the model functions on the limited set of 8 example poses.
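A sketch of Eq. (14), together with one way the prior statistics might be pooled from the previously learned kernels, is given below; theta_cup and theta_truck are hypothetical parameter vectors, and with only two samples the covariance is rank-deficient, hence the pseudo-inverse.

def neg_log_posterior(theta_new, neg_log_PGP, mu_theta, Sigma_theta_inv):
    """Eq. (14): Gaussian prior penalty on a new object's kernel parameters."""
    d = theta_new - mu_theta
    return neg_log_PGP + d @ Sigma_theta_inv @ d

theta_cup = np.array([1.0, 100.0, 0.5])     # hypothetical learned (alpha, beta, gamma)
theta_truck = np.array([1.2, 90.0, 0.6])    # hypothetical learned (alpha, beta, gamma)
thetas = np.stack([theta_cup, theta_truck])
mu_theta = thetas.mean(axis=0)
Sigma_theta_inv = np.linalg.pinv(np.cov(thetas, rowvar=False))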
To test the model's robustness to noise and missing data, we randomly selected 10 latent
coordinates corresponding to a subset of learned cup and truck image pairs. We then added
varying displacements to the latent coordinates and synthesized the corresponding novel
views for all 3 observation spaces. Displacements varied from 0 to 0.45 (all 72 latent coordinates lie in the region from [−0.70, −0.87] to [0.72, 0.56]). The synthesized views are shown
in Fig. 1(b), with images for the cup and truck in the first two rows. Latent coordinates in
regions of low model likelihood generate images that appear blurry or noisy. More interestingly, despite the small number of images used for the car, the model correctly matches the
orientation of the car to the synthesized images of the cup and truck. Thus, the model can
synthesize reasonable correspondences (given a latent point) even if the number of training
examples used to learn kernel parameters is small.
Fig. 2 illustrates the recognition performance of the 'inverse' Gaussian process model as a
function of the amount of noise added to the inputs. Using the latent space and kernel parameters learned for Fig. 1, we present 72 views of the coffee cup with varying amounts of
additive, zero-mean white noise, and determine the fraction of the 72 poses correctly classified by the model. The model estimates the pose using 1-nearest-neighbor classification
of the latent coordinates x learned during training:

    argmax_{x′} k(x, x′)        (15)
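Eq. (15) amounts to a nearest-neighbour lookup in kernel space; a minimal sketch, reusing rbf_kernel from above:

def nearest_latent(x_star, X_train, alpha, beta, gamma):
    """Return the index of the training latent point x' maximizing k(x, x')."""
    k = rbf_kernel(x_star[None, :], X_train, alpha, beta, gamma)[0]
    return int(np.argmax(k))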
The recognition performance degrades gracefully with increasing noise power. Fig. 2 also
plots sample images from one pose of the cup at several different noise levels. For two
of the noise levels, we show the 'denoised' cup image selected using the nearest-neighbor
¹http://www1.cs.columbia.edu/CAVE/research/softlib/coil-100.html
Figure 1: Pose synthesis for multiple objects using shared structure: (a) Graphical model for our
shared structure latent variable model. The latent space X maps to two (or more) observation spaces
Y, Z using a nonlinear kernel. 'Inverse' Gaussian process kernels map back from observations to
latent coordinates. (b) The model learns pose correspondences for images of the coffee cup and
toy truck (Y and Z) by fitting kernel parameters and a 2-dimensional latent variable space. After
learning the latent coordinates for the cup and truck, we fit kernel parameters for a novel object (the
toy car). Unlike the cup and truck, where 72 pairs of views were used to fit kernel parameters and
latent coordinates, only 8 views were used to fit kernel parameters for the car. The model is robust
to noise in the latent coordinates; the numbers above each column (0 to .45) represent the amount of noise
added to the latent coordinates used to synthesize the images. Even at points where the model is uncertain
(indicated by the rightmost results in the Y and Z rows), the learned kernel extrapolates the correct
view of the toy car (the 'novel' row).
classification, and the corresponding reconstructed truck. This illustrates how even noisy
observations in one space can predict corresponding observations in the companion space.
Fig. 3 illustrates the ability of the model to synthesize novel views of one object given
a novel view of a different object. A limited set of corresponding poses (24 of 72 total)
of a cat figurine and a mug were used to train the GP model. The remaining 48 poses
of the mug were then used as testing data. For each snapshot of the mug, we inferred
a latent point using the 'inverse' Gaussian process model and used the learned model to
synthesize what the cat figurine should look like in the same pose. A subset of these results
is presented in the rows on the left in Fig. 3: the 'Test' rows show novel images of the mug,
the 'Inferred' rows show the model's best estimate for the cat figurine, and the 'Actual'
rows show the ground truth. Although the images for some poses are blurry and the model
fails to synthesize the correct image for pose 44, the model nevertheless manages to capture
fine detail on most of the images.
The grayscale plot at upper right in Fig. 3 shows model certainty 1/(σ_Y^2(x) + σ_Z^2(x)),
with white where the model is highly certain and black where the model is highly uncertain. Arrows indicate the path in latent space formed by the training images. The dashed
line indicates latent points inferred from testing images of the mug. Numbered latent coordinates correspond to the synthesized images at left. The latent space shows structure:
latent points for similar poses are grouped together, and tend to move along a smooth curve
in latent space, with coordinates for the final pose lying close to coordinates for the first
pose (as desired for a cyclic image sequence). The bar graph at lower right compares model
certainty for the numbered latent coordinates; higher bars indicate greater model certainty.
The model appears particularly uncertain for blurry inferred images, such as 8, 14, and 26.
Figure 2: Recognition using a Learned Latent Variable Space: After learning from 72 paired
correspondences between poses of a coffee cup and of a toy truck, the model is able to recognize different poses of the coffee cup in the presence of additive white noise. The fraction of images recognized
is plotted on the Y axis and the standard deviation of the white noise on the X axis. One pose
of the cup (of 72 total) is plotted for various noise levels (see text for details). 'Denoised' images
obtained from nearest-neighbor classification and the corresponding images for the Z space (the toy
truck) are also shown.

Fig. 4 shows an application of our framework to the problem of robotic imitation of human
actions. We trained our model on a dataset containing human poses (acquired with a Vicon
motion capture system) and corresponding poses of a Fujitsu HOAP-2 humanoid robot.
Note that the robot has 25 degrees-of-freedom which differ significantly from the degrees-
of-freedom of the human skeleton used in motion capture. After training on 43 roughly
matching poses (only linear time scaling was applied to align the training poses), we tested the
model by presenting a set of 123 human motion capture poses (which includes the original
training set). Because the recognition model f_Y^{−1} : y → x is not trained from samples from
the prior distribution of the data, P(x, y), we found it necessary to approximate k(x) for
the recognition model by rescaling k(x) for the testing points to lie on the same interval as
the k(x) values of the training points. We suspect that providing proper samples from the
prior will improve recognition performance. As illustrated in Fig. 4 (inset panels, human
and robot skeletons), the model was able to correctly infer appropriate robot kinematic
parameters given a range of novel human poses. These inferred parameters were used in
conjunction with a simple controller to instantiate the pose in the humanoid robot (see
photos in the inset panels).
4 Discussion
Our Gaussian process model provides a novel method for learning nonlinear relationships
between corresponding sets of data. Our results demonstrate the model?s utility for diverse
tasks such as image synthesis and robotic programming by demonstration. The GP model
is closely related to other kernel methods for solving CCA [3] and similar problems [2].
The problems addressed by our model can also be framed as a type of nonlinear CCA. Our
method differs from the latent variable method proposed in [14] by using Gaussian process
regression. Disadvantages of our method with respect to [14] include lack of global optimality for the latent embedding; advantages include fewer independent parameters and the
ability to easily impose priors on the latent variable space (since GPLVM regression uses
conjugate gradient optimization instead of eigendecomposition). Empirically we found the
flexiblity of the GPLVM approach desirable for modeling a diversity of data sources.
Our framework learns mappings between each observation space and a latent space, rather
than mapping directly between the observation spaces. This makes visualization and interaction much easier. An intermediate mapping to a latent space is also more economical in
2
8
14
20
26
32
?1.5
Test
0.037
2
?1
68
Inferred
?0.5
0.036
8
0.035
0
Actual
32
38
14
0.5
38
44
50
56
62
0.034
44
20
26
68
62
1
0.033
56
Test
?0.5
50
0
0.5
1
0.35
Inferred
Actual
0.29
2
8 14 20 26 32 38 44 50 56 62 68
Figure 3: Synthesis of novel views using a shared latent variable model: After training on 24
paired images of a mug with a cat figurine (out of 72 total paired images), we ask the model to infer
what the remaining 48 poses of the cat would look like given 48 novel views of the mug. The system
uses an inverse Gaussian process model to infer a 2D latent point for each of the 48 novel mug views,
then synthesizes a corresponding view of the cat figurine. At left we plot the novel testing mug
images given to the system ('test'), the synthesized cat images ('inferred'), and the actual views of
the cat figurine from the database ('actual'). At upper right we plot the model uncertainty in the
latent space. The 24 latent coordinates from the training data are plotted as arrows, while the 48
novel latent points are plotted as crosses on a dashed line. At lower right we show model certainty
(1/σ_Z^2(x)) for the cat figurine data for each testing latent point x. Note the low certainty for the blurry
inferred images labeled 8, 14, and 26.
the limit of many correlated observation spaces. Rather than learning all pairwise relations
between observation spaces (requiring a number of parameters quadratic in the number of
observation spaces), our method learns one generative and one inverse mapping between
each observation space and the latent space (so the number of parameters grows linearly).
From a cognitive science perspective, such an approach is similar to the Active Intermodal
Mapping (AIM) hypothesis of imitation [6]. In AIM, an imitating agent maps its own
actions and its perceptions of others' actions into a single, modality-independent space.
This modality-independent space is analogous to the latent variable space in our model.
Our model does not directly address the 'correspondence problem' in imitation [7], where
correspondences between an agent and a teacher are established through some form of unsupervised feature matching. However, it is reasonable to assume that imitation by a robot
of human activity could involve some initial, explicit correspondence matching based on
simultaneity. Turn-taking behavior is an integral part of human-human interaction. Thus,
to bootstrap its database of corresponding data points, a robot could invite a human to take
turns playing out motor sequences. Initially, the human would imitate the robot?s actions
and the robot could use this data to learn correspondences using our GP model; later, the
robot could check and if necessary, refine its learned model by attempting to imitate the
human?s actions.
Acknowledgements: This work was supported by NSF AICS grant no. 130705 and an ONR YIP
award/NSF Career award to RPNR. We thank the anonymous reviewers for their comments.
References
[1] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović. Style-based inverse kinematics. In
Proc. SIGGRAPH, 2004.
[2] J. Ham, D. Lee, and L. Saul. Semisupervised alignment of manifolds. In AISTATS, 2004.
[3] P. L. Lai and C. Fyfe. Kernel and nonlinear canonical correlation analysis. Int. J. Neural Sys.,
10(5):365–377, 2000.
Figure 4: Learning shared latent structure for robotic imitation of human actions: The plot in
the center shows the latent training points (red circles) and model precision 1/σ_Z^2(x) for the robot model
(grayscale plot), with examples of recovered latent points for testing data (blue diamonds). Model
precision is qualitatively similar for the human model. Inset panels show the pose of the human
motion capture skeleton, the simulated robot skeleton, and the humanoid robot for each example
latent point. The model correctly infers robot poses from the human walking data (inset panels).
[4] N. D. Lawrence. Gaussian process models for visualization of high dimensional data. In
S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in NIPS 16.
[5] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: the
informative vector machine. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in
NIPS 15, 2003.
[6] A. N. Meltzoff. Elements of a developmental theory of imitation. In A. N. Meltzoff and
W. Prinz, editors, The imitative mind: Development, evolution, and brain bases, pages 19–41.
Cambridge: Cambridge University Press, 2002.
[7] C. Nehaniv and K. Dautenhahn. The correspondence problem. In Imitation in Animals and
Artifacts. MIT Press, 2002.
[8] A. O'Hagan. On curve fitting and optimal design for regression. Journal of the Royal Statistical
Society B, 40:1–42, 1978.
[9] C. E. Rasmussen. Evaluation of Gaussian Processes and other Methods for Non-Linear Regression. PhD thesis, University of Toronto, 1996.
[10] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[11] S. Schaal, A. Ijspeert, and A. Billard. Computational approaches to motor learning by imitation.
Phil. Trans. Royal Soc. London: Series B, 358:537–547, 2003.
[12] A. P. Shon, D. B. Grimes, C. L. Baker, and R. P. N. Rao. A probabilistic framework for model-based imitation learning. In Proc. 26th Ann. Mtg. Cog. Sci. Soc., 2004.
[13] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear
dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[14] J. J. Verbeek, S. T. Roweis, and N. Vlassis. Non-linear CCA and PCA by alignment of local
models. In Advances in NIPS 16, pages 297–304. 2003.
[15] C. K. I. Williams. Computing with infinite networks. In M. C. Mozer, M. I. Jordan, and
T. Petsche, editors, Advances in NIPS 9. Cambridge, MA: MIT Press, 1996.
1,929 | 2,752 | Norepinephrine and Neural Interrupts
Peter Dayan
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR, UK
[email protected]
Angela J. Yu
Center for Brain, Mind & Behavior
Green Hall, Princeton University
Princeton, NJ 08540, USA
[email protected]
Abstract
Experimental data indicate that norepinephrine is critically involved in
aspects of vigilance and attention. Previously, we considered the function of this neuromodulatory system on a time scale of minutes and
longer, and suggested that it signals global uncertainty arising from
gross changes in environmental contingencies. However, norepinephrine
is also known to be activated phasically by familiar stimuli in welllearned tasks. Here, we extend our uncertainty-based treatment of norepinephrine to this phasic mode, proposing that it is involved in the detection and reaction to state uncertainty within a task. This role of norepinephrine can be understood through the metaphor of neural interrupts.
1 Introduction
Theoretical approaches to understanding neuromodulatory systems are plagued by the latter's neural ubiquity, evolutionary longevity, and temporal promiscuity. Neuromodulators
act in potentially different ways over many different time-scales [14]. There are various
general notions about their roles, such as regulating sleeping and waking [13] and changing the signal to noise ratios of cortical neurons [11]. However, these are slowly giving
way to more specific computational ideas [20, 7, 10, 24, 25, 5], based on such notions as
optimal gain scheduling, prediction error and uncertainty.
In this paper, we focus on the short term activity of norepinephrine (NE) neurons in the
locus coeruleus [18, 1, 2, 3, 16, 4]. These neurons project NE to subcortical structures and
throughout the entire cortex, with individual neurons having massive axonal arborizations
[12]. Over medium and short time-scales, norepinephrine is implicated in various ways in
attention, vigilance, and learning. Given the widespread distribution and effects of NE in
key cognitive tasks, it is very important to understand what it is in a task that drives the
activity of NE neurons, and thus what computational effects it may be exerting.
Figure 1 illustrates some of the key data that has motivated theoretical treatments of NE.
Figure 1A;B;C show more tonic responses operating around a time-scale of minutes. Figures 1D;E;F show the short-term effects that are our main focus here.
Briefly, Figures 1A;B show that when the rules of a task are reversed, NE influences the
speed of adaptation to the changed contingency (Figure 1A) and the activity of noradrenergic cells is tonically elevated (Figure 1B). Based on these data, we suggested [24, 25] that
medium-term NE reports unexpected uncertainty arising from unpredicted changes in an
environment or task. This signal is a key part of a strategy for inference in potentially labile
contexts. It operates in collaboration with a putatively cholinergic signal which reports on
expected uncertainty that arises, for instance, from known variability or noise.
Figure 1: NE activity and effects. (A) Rats solve a sequential decision problem in a linear
maze. When the relevant cues are switched after a few days of learning (from spatial to
visual), rats with pharmacologically boosted NE ('idazoxan') learn to use the new set of
cues faster than the controls. Adapted from [9]. (B) In a vigilance task, monkeys respond
to rare targets and ignore common distractor stimuli. The trace shows the activity of a
single NE neuron in the locus coeruleus (LC) around the time of a target-distractor reversal (vertical line). Tonic activity is elevated for a considerable period. Adapted from [2].
(C) Correlation between the gross fluctuations in the tonic activity of a single NE neuron
(upper) and performance in the task (lower, measured by false alarm rate). Adapted from
[20]. (D) Single NE cells are activated on a phasic time-scale stimulus locked (vertical line)
to the target (upper plot) and not the distractor (lower plot). Adapted from [16]. (E) The
average responses of a large number of norepinephrine cells (over a total of 41,454 trials)
stimulus locked (vertical line) to targets or distractors, sorted by the nature and rectitude
of the response. The asterisk marks (similar) early activation of the neurons by the stimulus. Adapted from [16]. (F) In a GO/NO-GO olfactory discrimination task for rats, single
units are activated by the target odor (and not by the distractor odor), but are temporally
much more tightly locked to the response (right) than the stimulus (left). Trials are ordered
according to the time between stimulus (blue) and response (red). Adapted from [4].
However, Figures 1D;E;F, along with other substantial neurophysiological data on the activity of NE neurons [18, 4], show NE neurons have phasic response properties that lie
outside this model. The data in Figure 1D;E come from a vigilance task [1], in which
subjects can gain reward by reacting to a rare target (a rectangle oriented one way), while
ignoring distractors (a rectangle oriented in the orthogonal direction). Under these circumstances, NE is consistently activated by the target and not the distractor (Figure 1D). There
are also clear correlations in the magnitude of the NE activity and the nature of a trial: hit,
miss, false alarm, correct reject (Figure 1E). It is known that the activity is weaker if the targets are more common [17] (though the lack of response to rare distractors shows that NE
is not driven by mere rarity), and disappears if no action need be taken in response to the
target [18]. In fact, the signal is more tightly related in time to the subsequent action than
the preceding stimulus (Figure 1F). The signal has been qualitatively described in terms of
influencing or controlling the allocation of behavioral or cognitive resources [20, 4].
Since it arises on every trial in an extremely well-learned task with stable stimulus contingencies, this NE signal clearly cannot be indicating unpredicted task changes. Brown et
al. [5] have recently made the seminal suggestion that it reports changes in the statistical
structure of the input (stimulus-present versus stimulus-absent) to decision-making circuits
that are involved in initiating differential responding to distinct target stimuli. A statistically necessary consequence of the change in the input structure is that afferent information
should be integrated differently: sensory responses should be ignored if no target is present,
but taken seriously otherwise. Their suggestion is that NE, by changing the gain of neurons
in the decision-making circuit, has exactly this optimizing effect.
In this paper, we argue for a related, but distinct, notion of phasic NE, suggesting that it
reports on unexpected state changes within a task. This is a significant, though natural,
extension of its role in reporting unexpected task changes [25]. We demonstrate that it accounts well for the neurophysiological data. In agreement with the various accounts of the
effects of phasic NE, we consider its role as a form of internal interrupt signal [6]. Computers use interrupts to organize the correct handling of internal and external events such
as timers or peripheral input. Higher-level programs specify what interrupts are allowed
to gain control, and the consequences thereof. We argue that phasic NE is the medium for
a somewhat similar neural interrupt, allowing the correct handling of statistically atypical
events. This notion relates comfortably to many existing views of phasic NE, and provides
a computational correlate for quantitative models.
2 The Model
Figure 2A illustrates a simple hidden Markov generative model (HMM) of the vigilance
task in Figure 1B-E. The (start) state models the condition established when the monkey fixates the light and initiates a trial. Following a somewhat variable delay, either the
target (target) or the distractor (distractor) is presented, and the monkey must respond
appropriately (release a continuously depressed bar for target and continue pressing for
distractor). The transition out of start is uniformly distributed between timesteps 6 and 10,
implemented by a time-varying transition function for this node:

    P(s_t | s_{t−1} = start) = { 1 − q_t    if s_t = start,
                               { 0.8 q_t    if s_t = distractor,        (1)
                               { 0.2 q_t    if s_t = target,

where q_t = 1/(11 − t) for 6 ≤ t ≤ 10 and q_t = 0 otherwise. The distractor and target states
are assumed to be absorbing states (self-transition probability = 1). This transition function
ensures that the stimulus onset has a uniform distribution between timesteps 6 and 10 (and
probability 0 otherwise). Given that a transition out of start (into either target or distractor) takes
place, the probability is .2 for entering target and .8 for distractor, as in the actual task.
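For concreteness, the hazard q_t and the transition function of Eq. (1) can be written down directly. The code below is our own illustrative sketch; the state ordering (start, distractor, target) is an arbitrary coding choice.

import numpy as np

def q(t):
    """Hazard of leaving start: uniform stimulus onset on timesteps 6-10."""
    return 1.0 / (11 - t) if 6 <= t <= 10 else 0.0

def transition_from_start(t):
    """Eq. (1): P(s_t | s_{t-1} = start) over (start, distractor, target)."""
    return np.array([1.0 - q(t), 0.8 * q(t), 0.2 * q(t)])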
In addition, it is assumed that the node start does not emit observations, while target emits
x_t = t with probability η > 0.5 and d with probability 1 − η, and distractor emits x_t =
d with probability η and t with probability 1 − η. The transition out of start is evident as
soon as the first d or t is observed, while the magnitude of η controls the 'confusability' of
the target and distractor states. Figure 2B shows a typical run from this generative model.
The transition into target happens on step 10 (top), and the outputs generated are a mixture
of t and d (middle), with an overall prevalence of t (bottom).
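Sampling from the generative model is then straightforward. In the sketch below, states are coded 0 = start, 1 = distractor, 2 = target, and eta is the emission parameter above; all names are ours.

rng = np.random.default_rng(1)

def sample_trial(eta=0.675, T=30):
    """One trial: start emits nothing; the absorbing states emit 't'/'d'."""
    s, states, outputs = 0, [], []
    for t in range(1, T + 1):
        if s == 0:                                   # start may be left
            s = rng.choice(3, p=transition_from_start(t))
        states.append(s)
        if s == 0:
            outputs.append(None)                     # no observation emitted
        else:
            own = 't' if s == 2 else 'd'             # the state's 'own' symbol
            other = 'd' if s == 2 else 't'
            outputs.append(own if rng.random() < eta else other)
    return states, outputs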
Exact inference on this model can be performed in a manner similar to the forward pass in
a standard HMM:

    P(s_t | x_1, …, x_t) ∝ p(x_t | s_t) Σ_{s_{t−1}} P(s_t | s_{t−1}) P(s_{t−1} | x_1, …, x_{t−1}).        (2)
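A direct implementation of this forward pass, using the hazard q(t) defined earlier, is sketched below. The belief vector is ordered (start, distractor, target), and start is assigned zero likelihood for any emitted symbol, which is what makes its probability plummet at the first observation.

def filter_posterior(outputs, eta=0.675):
    """Exact filtering (Eq. 2); returns the posterior at every timestep."""
    b = np.array([1.0, 0.0, 0.0])                    # P(s_0 = start) = 1
    beliefs = []
    for t, x in enumerate(outputs, start=1):
        prior = np.array([b[0] * (1.0 - q(t)),       # remain in start
                          b[1] + b[0] * 0.8 * q(t),  # absorbing distractor
                          b[2] + b[0] * 0.2 * q(t)]) # absorbing target
        if x is None:                                # nothing observed yet
            b = prior
        else:
            like = np.array([0.0,                    # start emits no symbol
                             eta if x == 'd' else 1.0 - eta,
                             eta if x == 't' else 1.0 - eta])
            b = like * prior
            b = b / b.sum()
        beliefs.append(b.copy())
    return np.array(beliefs)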
Because start does not produce outputs, as soon as the first t is observed, the probability of
start plummets to 0. There then ensues an inferential battle between target and distractor,
with the latter having the initial advantage, since its prior probability is 80%.
Figure 2: The model. (A) The task is modeled as a hidden Markov model (HMM), with transitions from start to either distractor (probability .8) or target (probability .2). The transitions
happen between timesteps 6 and 10 with uniform probability; distractor and target
are absorbing states. The only outputs are from the absorbing states, and the two have overlapping output distributions over t and d, with probability η > .5 for their 'own' output (t
for target, and d for distractor) and 1 − η for the other output. (B) Sample run with a transition from start to target at timestep 10 (upper). The outputs favor target once the state has
changed (middle), more clearly shown in the cumulative plot (bottom). (C) Correct probabilistic inference in the task leads to the probabilities for the three states as shown. The
distractor's initial advantage arises from a base-rate effect, as it is the more likely default
transition. (D) Model NE signal for four trials, including one for a hit (top; same trial as in
B;C), a false alarm (fa), a miss (miss) and a correct rejection (cr). The second vertical line
represents the point at which the decision was taken (target vs. distractor).
Because of the preponderance of transitions to distractor over target, the distractor state
can be thought of as the reference or default state. Evidence against that default state is
a form of unexpected uncertainty within a task, and we propose that phasic NE reports
this uncertainty. More specifically, NE signals P(target | x_1, …, x_t)/P(target), where
P(target) = .2 is the prior probability of observing a target trial. We assume that a
target-response is initiated when P(target | x_1, …, x_t) exceeds 0.95, or equivalently, when
the NE signal exceeds 0.95/P(target). This implies the following intuitive relationship:
the smaller the prior probability of the non-default state target, the greater the NE-mediated
'surprise' signal has to be in order to convince the inferential system that an anomalous
stimulus has been observed. We also assume that if the posterior probability of target
reaches 0.01, then the trial ends with no action (either a cr or a miss). The asymmetry in
the thresholds arises from the asymmetry in the response contingencies of the task. Further,
to model non-inferential errors, we assume that there is a probability of 0.0005 per timestep
of releasing the bar after the transition out of start. Once a decision is reached, the NE
signal is set back to baseline (1, for equal prior and posterior) after a delay of 5 timesteps.
The NE activity during the start state is also rather arbitrary. Activity is at baseline before
the stimulus comes on, since prior and posterior match when there is no explicit information
from the world. When the stimulus comes on, the divisive normalization makes the activity
go above baseline because although the transition was expected, its occurrence was not
predicted with perfect precision. The magnitude of this activity depends on the precision
of the model of the transition time and on the uncertainty in the interval timer. We set
it to a small super-baseline level to match the data.
[Figure 3 graphics: (A) stimulus-locked and (B) response-locked model NE activity traces for hit, fa, miss, and cr trials over timesteps.]
Figure 3: NE activity. (A) NE activity locked to the stimulus onset (i.e., the transition out of
start). (B) NE activity response-locked to the decision to act, just for hit and fa trials. Note
the difference in scale between the two figures.
3
Results
Figure 2C illustrates the inferential performance of the model for the sample run in Figure 2B,C. When the first t is observed on timestep 10, the probability of start drops to 0 and the probability of distractor, which has an initial advantage over target due to its higher probability, eventually loses out to target as the evidence overwhelms the prior. Figure 2D shows the model's NE signal for one example each of hit, fa, miss, and cr trials.
Figure 3 presents our main results. Figure 3A shows the average NE signal for the four
classes of responses (hit, false alarm, miss, and correct rejection), time-locked to the start
of the stimulus. These traces should be compared with those in Figure 1E. The basic
form of the rise of the signal in the model is broadly similar to that in the data; as we
have argued, the fall is rather arbitrary. Figure 3B shows the average signal locked to
the time of reaction (for hit and false alarm trials) rather than stimulus onset. As in the
data (Figure 1F), response-locked activities are much more tightly clustered, although this
flatters the model somewhat, since we do not allow for any variability in the response time
as a function of when the probability of state target reaches the threshold. Since the decay
of the signal following a response is unconstrained, the trace terminates when the response
is determined, usually when the probability of target reaches threshold, but also sometimes
when there is an accidental erroneous response.
Figure 4 shows some additional features of the NE signal in this case. Figure 4A compares
the effect of making the discrimination between target and distractor more or less difficult
in the model (upper) and in the data (lower; [16]). As in the data, the stimulus-locked NE
signal is somewhat broader for the more difficult case, since information has to build up
over a longer period. Also as in the data, correct rejections are much less affected than hits.
Figure 4B shows response-locked NE. Although it is, correctly, slightly broader for the more
difficult discrimination, the timing is not quite the same. This is largely due to the lack of
a realistic model tying the defeat of the default state assumption to a behavioral response.
For the easy task (η = 0.675), there were 19% hits, 1.5% false alarms, 1% misses and 77% correct rejections. For the difficult task (η = 0.65) the main difference was an increase in
the number of misses to 1.5%, largely at the expense of hits. Note that since the NE signal
is calculated relative to the prior likelihood, making target more likely would reduce the
NE signal exactly proportionally. The data certainly hint at such a reduction [17] although
the precise proportionality is not clear.
4
Discussion
The present model of the phasic activity of NE cells is a direct and major extension of
our previous model of tonic aspects of this neuromodulator. The key difference is that
[Figure 4 graphics: (A, B) recorded LC activity (spikes/sec) against time; (C, D) simulated NE activity against timestep, comparing hit and cr trials under easy and difficult discriminations.]
Figure 4: NE activities and task difficulty. (A) Stimulus-locked LC responses are slower and broader for a more difficult discrimination, where difficulty is controlled by the similarity of target and distractor stimuli. (B) When aligned to response, LC activities for easy and difficult discriminations are more similar, although their response in the more difficult condition is still somewhat attenuated compared to the easy one. Data in A,B adapted from [16]. (C) Discrimination difficulty in the model is controlled by the parameter η. When η is reduced from 0.675 (easy; solid) to 0.65 (hard; dashed), simulated NE activity also becomes slower and broader when aligned to stimulus. (D) Same traces aligned to response
indicate NE activity in the difficult condition is attenuated in the model.
unexpected uncertainty is now about the state within a current characterization of the task
rather than about the characterization as a whole. These aspects of NE functionality are
likely quite widespread, and allow us to account for a much wider range of data on this
neuromodulator.
In the model, NE activity is explicitly normalized by prior probabilities arising from the
default state transitions in tasks. This is necessary to measure specifically unexpected uncertainty, and explains the decrement in NE phasic response as a function of the target
probability [17]. It is also associated with the small activation to the stimulus onset, although the precise form of this deserves closer scrutiny. For instance, if subjects were to
build a richer model of the statistics of the time of the transition out of the start state, then
we should see this reflected directly in the NE signal even before the stimulus comes on,
for instance related to the inverse of the survival function for the transition. We would also
expect this transition to effect a different NE signature if stimuli were expected during start
that could also be confused with those expected during target and distractor.
If NE indeed reports on the failure of the current state within the model of the task to
account successfully for the observations, then what effect should it have? One useful
way to think about the signal is in terms of an interrupt signal in computers. In these,
a control program establishes a set of conditions (e.g., keyboard input) under which normal processing should be interrupted, in order that the consequence of the interrupt (e.g., a
keystroke) can be appropriately handled. Computers have highly centralized processing
architecture, and therefore the interrupt signal only needs a very limited spatial extent to
exert a widespread effect on the course of computation. By contrast, processing in the
brain is highly distributed, and therefore it is necessary for the interrupt signal to have a
widespread distribution, so that the full ramifications of the failure of the current state can
be felt. Neuromodulatory systems are ideal vehicles for the signal.
The interrupt signal should engage mechanisms for establishing the new state, which then
allows a new set of conditions to be established as to which interrupts will be allowed to
occur, and also to take any appropriate action (as in the task we modeled). The interrupt
signal can be expected to be beneficial, for instance, when there is competition between
tasks for the use of neural resources such as receptive fields [8].
Apart from interrupts such as these under sophisticated top-down control, there are also
more basic contingencies from things such as critical potential threats and stressors that
should exert a rapid and dramatic effect on neural processing (these also have computational analogues in signals such as that power is about to fail). The NE system is duly
subject to what might be considered as bottom-up as well as top-down influences [21].
The interrupt-based account is a close relative of existing notions of phasic NE. For instance, NE has been implicated in the process of alerting [23]. The difference from our
account is perhaps the stronger tie in the latter to actual behavioral output. A task with
second-order contingencies may help to differentiate the two accounts. There are also
close relations with theories [20, 5] that suggest how NE may be an integral part of an optimal decisional strategy. These propose that NE controls the gain in competitive decisionmaking networks that implement sequential decision-making [22], essentially by reporting
on the changes in the statistical structure of the inputs induced by stimulus onset. It is also
suggested that a more extreme change in the gain, destabilizing the competitive networks
through explosive symmetry breaking, can be used to freeze or lock-in any small difference
in the competing activities.
The idea that NE can signal the change in the input statistics occasioned by the (temporally unpredictable) occurrence of the target is highly appealing. However, the statistics of the
input change when either the target or the distractor appears, and so the preference for
responding to the target at the expense of the distractor is strange. The effect of forcing the
decision making network to become unstable, and therefore enforcing a speeded decision
is much closer to an interrupt; but then it is not clear why this signal should decrease as
the target becomes more common. Further, since in the unstable regime, the statistical
optimality of integration is effectively abandoned, the computational appeal of the signal
is somewhat weakened. However, this alternative theory does make an important link to
sequential statistical analysis [22], raising issues about things like thresholds for deciding
target and distractor that should be important foci of future work here too.
Figure 1C shows an additional phenomenon that has arisen in a task when subjects were not
even occasionally taxed with difficult discrimination problems. The overall performance
fluctuates dramatically (shown by the changing false alarm rate), in a manner that is tightly
correlated with fluctuations in tonic NE activity. Periods of high tonic activity are correlated with low phasic activation to the targets (data not shown). Aston-Jones, Cohen and
their colleagues [20, 3] have suggested that NE regulates the balance between exploration
and exploitation. The high tonic phase is associated with the former, with subjects failing
to concentrate on the contingencies that lead to their current rewards in order to search
for stimuli or actions that might be associated with better rewards. Increasing the ease
of interruptability to either external cues or internal state changes, could certainly lead to
apparently exploratory behavior. However, there is little evidence as to how this sort of
exploration is being actively determined, since, for instance, the macroscopic fluctuations
evident in Figure 1C do not arise in response to any experimental contingency. Given the
relationship between phasic and tonic firing, further investigation of these periodic fluctuations and their implications would be desirable.
Finally, in our previous model [24, 25], tonic NE was closely coupled with tonic acetylcholine (ACh), with the latter reporting expected rather than unexpected uncertainty. The
account of ACh should transfer somewhat directly into the short-term contingencies within
a task ? we might expect it to be involved in reporting on aspects of the known variability
associated with each state, including each distinct stimulus state as well as the no-stimulus
state. As such, this ACh signal might be expected to be relatively more tonic than NE (an
effect that is also apparent in our previous work on more tonic interactions between ACh
and NE; e.g., Figure 2 of [24]). One attractive target for an account along these lines is the
sustained attention task studied by Sarter and colleagues, which involves temporal uncertainty. Performance in this task is exquisitely sensitive to cholinergic manipulation [19],
but unaffected by gross noradrenergic manipulation [15]. We may again expect there to be
interesting part-opponent and part-synergistic interactions between the neuromodulators.
Acknowledgements
We are grateful to Gary Aston-Jones, Sebastien Bouret, Jonathan Cohen, Peter Latham,
Susan Sara, and Eric Shea-Brown for helpful discussions. Funding was from the Gatsby
Charitable Foundation, the EU BIBA project and the ACI Neurosciences Intégratives et
Computationnelles of the French Ministry of Research.
References
[1] Aston-Jones, G, Rajkowski, J, Kubiak, P & Alexinsky, T (1994). Locus coeruleus neurons in
monkey are selectively activated by attended cues in a vigilance task. J. Neurosci. 14:4467-4480.
[2] Aston-Jones, G, Rajkowski, J & Kubiak, P (1997). Conditioned responses of monkey locus
coeruleus neurons anticipate acquisition of discriminative behavior in a vigilance task. Neuroscience 80:697-715.
[3] Aston-Jones, G, Rajkowski, J & Cohen, J (2000). Locus coeruleus and regulation of behavioral
flexibility and attention. Prog. Brain Res. 126:165-182.
[4] Bouret, S & Sara, SJ (2004). Reward expectation, orientation of attention and locus coeruleusmedial frontal cortex interplay during learning. Eur. J. Neurosci. 20:791-802.
[5] Brown, E, Gao, J, Holmes, P, Bogacz, R, Gilzenrat, M & Cohen, JD (2005). Simple neural
networks that optimize decisions. Int. J. Bif. & Chaos, in press.
[6] David Johnson, J (2003). Noradrenergic control of cognition: global attenuation and an interrupt function. Med. Hypoth. 60:689-692.
[7] Dayan, P & Yu, AJ (2001). ACh, uncertainty, and cortical inference. NIPS 2001.
[8] Desimone, R & Duncan, J (1995). Neural mechanisms of selective visual attention. Annual
Reviews in Neuroscience 18:193-222.
[9] Devauges, V & Sara, SJ (1990). Activation of the noradrenergic system facilitates an attentional
shift in the rat. Beh. Brain Res. 39:19-28.
[10] Doya, K (2002). Metalearning and neuromodulation. Neur. Netw. 15:495-506.
[11] Foote, SL, Freedman, R & Oliver, AP (1975). Effects of putative neurotransmitters on neuronal
activity in monkey auditory cortex. Brain Res. 86:229-242.
[12] Freedman, R, Foote, SL & Bloom, FE (1975) Histochemical characterization of a neocortical
projection of the nucleus locus coeruleus in the squirrel monkey. J. Comp. Neurol. 164:209-231.
[13] Jouvet, M (1969). Biogenic amines and the states of sleep. Science 163:32-41.
[14] Marder, E & Thirumalai, V (2002). Cellular, synaptic and network effects of neuromodulation.
Neur. Netw. 15:479-493.
[15] McGaughy, J, Sandstrom, M, Ruland, S, Bruno JP & Sarter, M (1997). Lack of effects of lesions
of the dorsal noradrenergic bundle on behavioral vigilance. Beh. Neurosci. 111:646-652.
[16] Rajkowski, J, Majczynski, H, Clayton, E & Aston-Jones, G (2004). Activation of monkey
locus coeruleus neurons varies with difficulty and performance in a target detection task. J.
Neurophysiol. 92:361-371.
[17] Rajkowski, J, Majczynski, H, Clayton, E, Cohen, JD & Aston-Jones, G (2002). Phasic activation of monkey locus coeruleus (LC) neurons with recognition of motivationally relevant
stimuli. Society for Neuroscience, Abstracts 86.10.
[18] Sara, SJ & Segal, M (1991). Plasticity of sensory responses of locus coeruleus neurons in the
behaving rat: implications for cognition. Prog. Brain Res. 88:571-585.
[19] Turchi, J & Sarter, M (2001). Bidirectional modulation of basal forebrain NMDA receptor
function differentially affects visual attention but not visual discrimination performance. Neuroscience 104:407-417.
[20] Usher, M, Cohen, JD, Servan-Schreiber, D, Rajkowski, J & Aston-Jones, G (1999). The role of
locus coeruleus in the regulation of cognitive performance. Science 283:549-554.
[21] Van Bockstaele, EJ, Chan, J & Pickel, VM (1996). Input from central nucleus of the amygdala
efferents to pericoerulear dendrites, some of which contain tyrosine hydroxylase immunoreactivity. Journal of Neuroscience Research 45:289-302.
[22] Wald, A (1947). Sequential Analysis. New York, NY: John Wiley & Sons.
[23] Witte, EA & Marrocco, RT (1997). Alteration of brain noradrenergic activity in rhesus monkeys
affects the alerting component of covert orienting. Psychopharmacology 132:315-323.
[24] Yu, AJ & Dayan, P (2003). Expected and unexpected uncertainty. ACh and NE in the neocortex.
NIPS 2002.
[25] Yu, AJ & Dayan, P (2005). Uncertainty, neuromodulation, and attention. Neuron 46, 681-692.
Nested sampling for Potts models
Iain Murray
Gatsby Computational Neuroscience Unit
University College London
[email protected]
Zoubin Ghahramani
Gatsby Computational Neuroscience Unit
University College London
[email protected]
David J.C. MacKay
Cavendish Laboratory
University of Cambridge
[email protected]
John Skilling
Maximum Entropy
Data Consultants Ltd.
[email protected]
Abstract
Nested sampling is a new Monte Carlo method by Skilling [1] intended for general Bayesian computation. Nested sampling provides a robust alternative to annealing-based methods for computing normalizing constants. It can also generate estimates of other
quantities such as posterior expectations. The key technical requirement is an ability to draw samples uniformly from the prior
subject to a constraint on the likelihood. We provide a demonstration with the Potts model, an undirected graphical model.
1
Introduction
The computation of normalizing constants plays an important role in statistical
inference. For example, Bayesian model comparison needs the evidence, or marginal
likelihood of a model M
\[ Z = p(D \mid M) = \int p(D \mid \theta, M)\, p(\theta \mid M)\, d\theta \equiv \int L(\theta)\, \pi(\theta)\, d\theta, \qquad (1) \]
where the model has prior π and likelihood L over parameters θ after observing data
D. This integral is usually intractable for models of interest. However, given its
importance in Bayesian model comparison, many approaches, both sampling-based and deterministic, have been proposed for estimating it.
Often the evidence cannot be obtained using samples drawn from either the prior
π, or the posterior p(θ|D, M) ∝ L(θ)π(θ). Practical Monte Carlo methods need
to sample from a sequence of distributions, possibly at different "temperatures" p(θ|β) ∝ L(θ)^β π(θ) (see Gelman and Meng [2] for a review). These methods are
sometimes cited as a gold standard for comparison with other approximate techniques, e.g. Beal and Ghahramani [3]. However, care is required in choosing intermediate distributions; appropriate temperature-based distributions may be difficult
or impossible to find. Nested sampling provides an alternate standard, which makes
no use of temperature and does not require tuning of intermediate distributions or
other large sets of parameters.
[Figure 1 graphics: three panels, each showing likelihood contours in the (θ_1, θ_2) parameter space (top) above the corresponding monotonic curve L(x) for x ∈ (0, 1]; panel (a) marks the prior masses 1/8, 1/4, 1/2, panel (b) marks points x_3 < x_2 < x_1, and panel (c) shows the N-particle case.]
Figure 1: (a) Elements of parameter space (top) are sorted by likelihood and arranged on the x-axis. An eighth of the prior mass is inside the innermost likelihood contour in this figure. (b) Point x_i is drawn from the prior inside the likelihood contour defined by x_{i−1}. L_i is identified and p({x_i}) is known, but exact values of x_i are not known. (c) With N particles, the least likely one sets the likelihood contour and is replaced by a new point inside the contour ({L_i} and p({x_i}) are still known).
Nested sampling uses a natural definition of Z, a sum over prior mass. The
weighted sum over likelihood elements is expressed as the area under a monotonic
one-dimensional curve "L vs x" (figure 1(a)), where:
\[ Z = \int L(\theta)\, \pi(\theta)\, d\theta = \int_0^1 L(\theta(x))\, dx. \qquad (2) \]
This is a change of variables dx(θ) = π(θ)dθ, where each volume element of the prior in the original θ-vector space is mapped onto a scalar element on the one-dimensional x-axis. The ordering of the elements on the x-axis is chosen to sort the prior mass in decreasing order of likelihood values (x_1 < x_2 ⇔ L(θ(x_1)) > L(θ(x_2))). See appendix A for dealing with elements with identical likelihoods.
Given some points {(x_i, L_i)}_{i=1}^{I} ordered such that x_i > x_{i+1}, the area under the curve (2) is easily approximated. We denote by Ẑ estimates obtained using a trapezoidal rule. Rectangle rules upper and lower bound the error Ẑ − Z.
Points with known x-coordinates are unavailable in general. Instead we generate
points, {θ_i}, such that the distribution p(x) is known (where x ≡ {x_i}), and find
their associated {Li }. A simple algorithm to draw I points is algorithm 1, see also
figure 1(b).
Algorithm 1
Initial point: draw θ_1 ~ π(θ).
For i = 2 to I: draw θ_i ~ π̃(θ | L(θ_{i−1})), where
\[ \tilde\pi(\theta \mid L(\theta_{i-1})) \propto \begin{cases} \pi(\theta) & L(\theta) > L(\theta_{i-1}) \\ 0 & \text{otherwise.} \end{cases} \qquad (3) \]

Algorithm 2
Initialize: draw N points θ^(n) ~ π(θ).
For i = 2 to I:
- m = argmin_n L(θ^(n))
- θ_{i−1} = θ^(m)
- draw θ^(m) ~ π̃(θ | L(θ_{i−1})), given by equation (3)
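A short, language-agnostic sketch of algorithm 2 (our illustration, not the authors' code) makes the bookkeeping explicit; sample_prior, log_like, and draw_constrained are assumed callables, with draw_constrained standing in for the constrained-prior draw discussed in section 2.1.

import numpy as np

def nested_sampling(sample_prior, log_like, draw_constrained, N, I):
    # sample_prior(N) -> list of N points from pi(theta)
    # log_like(theta) -> log L(theta)
    # draw_constrained(points, log_L, threshold) -> (approximate) draw from
    #     the prior restricted to log L(theta) > threshold
    points = sample_prior(N)
    log_L = np.array([log_like(p) for p in points])
    records = []                          # log L_i of each discarded point
    for i in range(1, I):
        m = int(np.argmin(log_L))         # least likely particle
        records.append(log_L[m])
        points[m] = draw_constrained(points, log_L, threshold=log_L[m])
        log_L[m] = log_like(points[m])
    return np.array(records)              # pair with x_i ~ e^{-i/N} for Zhat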
We know p(x_1) = Uniform(0, 1), because x is a cumulative sum of prior mass. Similarly p(x_i | x_{i−1}) = Uniform(0, x_{i−1}), as every point is drawn from the prior subject to L(θ_i) > L(θ_{i−1}) ⇔ x_i < x_{i−1}. This recursive relation allows us to compute p(x).
A simple generalization, algorithm 2, uses multiple θ particles; at each step the least likely is replaced with a draw from a constrained prior (figure 1(c)). Now p(x_1 | N) = N x_1^{N−1}, and subsequent points have p(x_i/x_{i−1} | x_{i−1}, N) = N (x_i/x_{i−1})^{N−1}.
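The deterministic behaviour visible in figure 2 follows directly from this recursion; a short derivation (ours) in the same notation: writing \(x_i = \prod_{j=1}^{i} t_j\) with shrinkage ratios \(t_j\) i.i.d. and \(p(t_j \mid N) = N t_j^{N-1}\) on \((0,1)\), we have \(-\log t_j \sim \text{Exponential}(N)\), so
\[ \mathbb{E}[\log t_j] = -\frac{1}{N}, \qquad \operatorname{Var}[\log t_j] = \frac{1}{N^2}, \]
and therefore \(\log x_i\) has mean \(-i/N\) and standard deviation \(\sqrt{i}/N\), which is exactly the \(\exp(-i/N \pm \sqrt{i}/N)\) band plotted in figure 2.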
[Figure 2 graphics: two log-scale plots of x_i against iteration number i, showing the arithmetic mean ⟨x_i⟩, the geometric mean exp⟨log x_i⟩, error bars, and superimposed samples.]
Figure 2: The arithmetic and geometric means of x_i against iteration number, i, for algorithm 2 with N = 8. Error bars on the geometric mean show exp(−i/N ± √i/N). Samples of p(x|N) are superimposed (i = 1600 . . . 1800 omitted for clarity).
This distribution over x, combined with the observations {L_i}, gives a distribution over Ẑ:
\[ p(\hat Z \mid \{L_i\}, N) = \int \delta\big(\hat Z(x) - \hat Z\big)\, p(x \mid N)\, dx. \qquad (4) \]
Samples from the posterior over θ are also available; see Skilling [1] for details.
Nested sampling was introduced by Skilling [1]. The key idea is that samples from
the prior, subject to a nested sequence of constraints (3), give a probabilistic realization of the curve, figure 1(a). Related work can be found in McDonald and
Singer [4]. Explanatory notes and some code are available online¹. In this paper we
present some new discussion of important issues regarding the practical implementation of nested sampling and provide the first application to a challenging problem.
This leads to the first cluster-based method for Potts models with first-order phase
transitions of which we are aware.
2
Implementation issues
2.1
MCMC approximations
The nested sampling algorithm assumes obtaining samples from π̃(θ | L(θ_{i−1})), equation (3), is possible. Rejection sampling using π would slow down exponentially with iteration number i. We explore approximate sampling from π̃ using Markov chain Monte Carlo (MCMC) methods.
In high-dimensional problems it is likely that the majority of π̃'s mass is typically in
a thin shell at the contour surface [5, p37]. This suggests finding efficient chains that
sample at constant likelihood, a microcanonical distribution. In order to complete
an ergodic MCMC method, we also need transition operators that can alter the
likelihood (within the constraint). A simple Metropolis method may suffice.
We must initialize the Markov chain for each new sample somewhere. One possibility is to start at the position of the deleted point, θ_{i−1}, on the contour constraint, which is independent of the other points and not far from the bulk of the required uniform distribution. However, if the Markov chain mixes slowly amongst modes, the new point starting at θ_{i−1} may be trapped in an insignificant mode. In this case it would be better to start at one of the other N−1 existing points inside the contour constraint. They are all draws from the correct distribution, π̃(θ | L(θ_{i−1})),
so represent modes fairly. However, this method may also require many Markov
chain steps, this time to make the new point effectively independent of the point it
cloned.
¹ http://www.inference.phy.cam.ac.uk/bayesys/
[Figure 3 graphics: three histograms of log(Ẑ) errors, panels (a)-(c).]
Figure 3: Histograms of errors in the point estimate log(Ẑ) over 1000 random experiments for different approximations. The test system was a 40-dimensional hypercube of length 100 with uniform prior centered on the origin. The log-likelihood was L = −θ^⊤θ/2. Nested sampling used N = 10, I = 2000. (a) Monte Carlo estimation (equation (5)) using S = 12 sampled trajectories. (b) S = 1200 sampled trajectories. (c) Deterministic approximation using the geometric mean trajectory. In this example perfect integration over p(x|N) gives a distribution of width ≈ 3 over log(Ẑ). Therefore, improvements over (c) for approximating equation (5) are unwarranted.
2.2
Integrating out x
To estimate quantities of interest, we average over p(x|N), as in equation (4). The mean of a distribution over log(Ẑ) can be found by simple Monte Carlo estimation:
\[ \mathbb{E}[\log(\hat Z)] = \int \log(\hat Z(x))\, p(x \mid N)\, dx \approx \frac{1}{S} \sum_{s=1}^{S} \log(\hat Z(x^{(s)})), \qquad x^{(s)} \sim p(x \mid N). \qquad (5) \]
This scheme is easily implemented for any expectation under p(x|N), including error bars from the variance of log(Ẑ). To reduce noise in comparisons between runs it is advisable to reuse the same samples from p(x|N) (e.g. clamp the seed used to generate them).
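A minimal sketch (ours) of this estimator: draw trajectories from p(x|N) using −log t_i ~ Exponential(N), form the trapezoidal estimate for each, and report the mean and spread of log Ẑ. A production version should accumulate in log space to avoid underflow of the likelihoods.

import numpy as np

def sample_log_x(I, N, rng):
    # cumulative log prior mass: log x_i = sum of log t_j, -log t_j ~ Exp(N)
    return np.cumsum(-rng.exponential(scale=1.0 / N, size=I))

def log_Zhat(log_L, log_x):
    # trapezoidal weights w_i = (x_{i-1} - x_{i+1}) / 2, with x_0 = 1, x_{I+1} = 0
    x = np.concatenate(([1.0], np.exp(log_x), [0.0]))
    widths = 0.5 * (x[:-2] - x[2:])
    return np.log(np.sum(widths * np.exp(log_L)))

def estimate(log_L, N, S=12, seed=0):
    rng = np.random.default_rng(seed)     # clamped seed, as advised above
    draws = [log_Zhat(log_L, sample_log_x(len(log_L), N, rng))
             for _ in range(S)]
    return np.mean(draws), np.std(draws)  # point estimate and error bar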
A simple deterministic approximation is useful for understanding, and also provides fast-to-compute, low-variance estimators. Figure 2 shows sampled trajectories of x_i as the algorithm progresses. The geometric mean path, x_i ← exp(∫ p(x_i|N) log x_i dx_i) = e^{−i/N}, follows the path of typical settings of x. Using this single x setting is a reasonable and very cheap alternative to averaging over settings (equation 5); see figure 3.
Typically the trapezoidal estimate of the integral, Ẑ, is dominated by a small number of trapezoids, around iteration i*, say. Considering uncertainty on just log x_{i*} = −i*/N ± √(i*)/N provides reasonable and convenient error bars.
3
Potts Models
The Potts model, an undirected graphical model, defines a probability distribution
over discrete variables s = (s_1, . . . , s_n), each taking on one of q distinct "colors":
\[ P(s \mid J, q) = \frac{1}{Z_P(J, q)} \exp\Big( \sum_{(ij) \in E} J(\delta_{s_i s_j} - 1) \Big). \qquad (6) \]
The variables exist as nodes on a graph where (ij) ∈ E means that nodes i and j are linked by an edge. The Kronecker delta, δ_{s_i s_j}, is one when s_i and s_j are the same color and zero otherwise. Neighboring nodes pay an "energy penalty" of J when they are different colors. Here we assume identical positive couplings J > 0 on each edge (section 4 discusses the extension to different J_ij). The Ising model and Boltzmann machine are both special cases of the Potts model with q = 2.
Our goal is to compute the normalization constant Z_P(J, q), where the discrete variables s are the θ variables that need to be integrated (i.e. summed) over.
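For concreteness, here is a sketch (ours, not the authors' code) of the unnormalized log-probability in equation (6) on an n × n grid with nearest-neighbour edges; the grid topology is an assumption for illustration.

import numpy as np

def grid_edges(n):
    # 4-neighbour edge list for an n x n lattice of sites 0..n*n-1
    idx = np.arange(n * n).reshape(n, n)
    right = np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1)
    down = np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1)
    return np.concatenate([right, down])

def log_prob_unnormalized(s, edges, J):
    # sum over edges of J * (delta_{s_i s_j} - 1),
    # i.e. -J * (number of disagreeing edges)
    agree = s[edges[:, 0]] == s[edges[:, 1]]
    return J * float(np.sum(agree - 1.0))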
3.1
Swendsen-Wang sampling
We will take advantage of the "Fortuin-Kasteleyn-Swendsen-Wang" (FKSW) joint distribution identified explicitly in Edwards and Sokal [6] over color variables s and a bond variable for each edge in E, d_ij ∈ {0, 1}:
\[ P(s, d) = \frac{1}{Z_P(J, q)} \prod_{(ij) \in E} \Big[ (1 - p)\,\delta_{d_{ij},0} + p\,\delta_{d_{ij},1}\,\delta_{s_i,s_j} \Big], \qquad p \equiv (1 - e^{-J}). \qquad (7) \]
The marginal distribution over s in the FKSW model is the Potts distribution,
equation (6). The marginal distribution over the bonds is the random cluster model
of Fortuin and Kasteleyn [7]:
\[ P(d) = \frac{1}{Z_P(J, q)}\, p^D (1-p)^{|E|-D}\, q^{C(d)} = \frac{1}{Z_P(J, q)} \exp\big(D \log(e^J - 1)\big)\, e^{-J|E|}\, q^{C(d)}, \qquad (8) \]
where C(d) is the number of connected components in a graph with edges wherever d_ij = 1, and D = Σ_{(ij)∈E} d_ij. As the partition functions of equations 6, 7 and 8 are identical, we should consider using any of these distributions to compute Z_P(J, q).
The algorithm of Swendsen and Wang [8] performs block Gibbs sampling on the joint model by alternately sampling from P(d|s) and P(s|d). This can convert a sample from any of the three distributions into a sample from one of the others.
Nested Sampling
A simple approximate nested sampler uses a fixed number of Gibbs sampling updates of ?
? . Cluster-based updates are also desirable in these models. Focusing on
the random cluster model, we rewrite equation (8):
1
L(d)?(d) where
ZN
L(d) = exp(D log(eJ ? 1)),
P (d) =
(9)
1 C(d)
ZP (J, q)
?(d) =
ZN =
exp(J|E|),
q
.
Z?
Z?
Likelihood thresholds are thresholds on the total number of bonds D. Many states
have identical D, which requires careful treatment, see appendix A. Nested sampling
on this system will give the ratio of ZP /Z? . The prior normalization, Z? , can be
found from the partition function of a Potts system at J = log(2).
The following steps give two MCMC operators to change the bonds d ? d0 :
1. Create a random coloring, s, uniformly from the q C(d) colorings satisfying
the bond constraints d, as in the Swendsen?Wang
algorithm.
P
2. Count sites that allow bonds, E = (ij)?E ?si ,sj .
P
3. Either, operator 1: record the number of bonds D0 = (ij)?E dij
Or, operator 2: draw D0 from Q(D0 |E(s)) ? E(s)
D0 .
4. Throw away the old bonds, d, and pick uniformly from one of the E(s)
D0
ways of setting D0 bonds in the E available sites.
The probability of proposing a particular coloring and new setting of the bonds is
1
1
Q(s, d0 |d) = Q(d0 |s, D0 )Q(D0 |E(s))Q(s|d) = E(s) Q(D0 |E(s)) C(d) .
(10)
q
D0
Summing over colorings, the correct Metropolis-Hastings acceptance ratio is:
P
\[ a = \frac{\pi(d')\, \sum_s Q(s, d \mid d')}{\pi(d)\, \sum_s Q(s, d' \mid d)} = \frac{q^{C(d')}}{q^{C(d)}} \cdot \frac{q^{C(d)}\, \sum_s Q(D \mid E(s)) / \binom{E(s)}{D}}{q^{C(d')}\, \sum_s Q(D' \mid E(s)) / \binom{E(s)}{D'}} = 1, \qquad (11) \]
Table 1: Partition function results for 16×16 Potts systems (see text for details).

Method                          | q = 2 (Ising), J = 1 | q = 10, J = 1.477
Gibbs AIS                       | 7.1 ± 1.1            | (1.5)
Swendsen-Wang AIS               | 7.4 ± 0.1            | (1.2)
Gibbs nested sampling           | 7.1 ± 1.0            | 12.2 ± 2.4
Random-cluster nested sampling  | 7.1 ± 0.7            | 14.1 ± 1.8
Acceptance ratio                | 7.3                  | 11.2
regardless of the choice in step 3. The simple first choice solves the difficult problem
of navigating at constant D. The second choice defines an ergodic chain².
4
Results
Table 1 shows results on two example systems: an Ising model, q = 2, and a q = 10 Potts model in a difficult parameter regime. We tested nested samplers using
Gibbs sampling and the cluster-based algorithm, annealed importance sampling
(AIS) [9] using both Gibbs sampling and Swendsen-Wang cluster updates. We also
developed an acceptance ratio method [10] based on our representation in equation
(9), which we ran extensively and should give nearly correct results.
Annealed importance sampling (AIS) was run 100 times, with a geometric spacing
of 10^4 settings of J as the annealing schedule. Nested sampling used N = 100 particles and 100 full-system MCMC updates to approximate each draw from π̃. Each Markov chain was initialized at one of the N−1 particles satisfying the current
constraint. In trials using the other alternative (section 2.1) the Gibbs nested
sampler could get stuck permanently in a local maximum of the likelihood, while
the cluster method gave erroneous answers for the Ising system.
AIS performed very well on the Ising system. We took advantage of its performance
in easy parameter regimes to compute Z_π for use in the cluster-based nested sampler. However, with a "temperature-based" annealing schedule, AIS was unable to give useful answers for the q = 10 system, while nested sampling appears to be correct within its error bars.
It is known that even the efficient Swendsen-Wang algorithm mixes slowly for Potts
models with q > 4 near critical values of J [11], see figure 4. Typical Potts model
states are either entirely disordered or ordered; disordered states contain a jumble
of small regions with different colors (e.g. figure 4(b)), in ordered states the system
is predominantly one color (e.g. figure 4(d)). Moving between these two phases
is difficult; defining a valid MCMC method that moves between distinct phases
requires knowledge of the relative probability of the whole collections of states in
those phases.
Temperature-based annealing algorithms explore the model for a range of settings
of J and fail to capture the correct behavior near the transition. Despite using
closely related Markov chains to those used in AIS, nested sampling can work in all
parameter regimes. Figure 4(e) shows how nested sampling can explore a mixture
of ordered and disordered phases. By moving steadily through these states, nested
sampling is able to estimate the prior mass associated with each likelihood value.
² Proof: with finite probability all s_i are given the same color, then any allowable D′ is possible, in turn all allowable d′ have finite probability.
[Figure 4 graphics: five 256 × 256 Potts configurations, panels (a)-(e).]
Figure 4: Two 256 × 256, q = 10 Potts models with starting states (a) and (c) were simulated with 5 × 10^6 full-system Swendsen-Wang updates with J = 1.42577. The corresponding results, (b) and (d), are typical of all the intermediate samples: Swendsen-
Wang is unable to take (a) into an ordered phase, or (c) into a disordered phase, although
both phases are typical at this J. (e) in contrast shows an intermediate state of nested
sampling, which succeeds in bridging the phases.
This behaviour is not possible in algorithms that use J as a control parameter.
The potentials on every edge of the Potts model in this paper were the same. Much
of the formalism above generalizes to allow different edge weights Jij on each edge,
and non-zero biases on each variable. Indeed Edwards and Sokal [6] gave a general
procedure for constructing such auxiliary-variable joint distributions. This generalization would make the model more relevant to MRFs used in other fields (e.g.
computer vision). The challenge for nested sampling remains the invention of effective sampling schemes that keep a system at or near constant energy. Generalizing
step 4 in section 3.2 would be the difficult step.
Other temperatureless Monte Carlo methods exist, e.g. Berg and Neuhaus [12] study
the Potts model using the multicanonical ensemble. Nested sampling has some
unique properties compared to the established method. Formally it has only one
free parameter, N the number of particles. Unless problems with multiple modes
demand otherwise, N = 1 often reveals useful information, and if the error bars on
Z are too large further runs with larger N may be performed.
5
Conclusions
We have applied nested sampling to compute the normalizing constant of a system
that is challenging for many Monte Carlo methods.
- Nested sampling's key technical requirement, an ability to draw samples uniformly from a constrained prior, is largely solved by efficient MCMC methods.
- No complex schedules are required; steady progress towards compact regions of large likelihood is controlled by a single free parameter, N, the number of particles.
- Multiple particles, a built-in feature of this algorithm, are often necessary to obtain accurate results.
- Nested sampling has no special difficulties on systems with first-order phase transitions, whereas all temperature-based methods fail.
We believe that nested sampling's unique properties will be found useful in a variety of statistical applications.
A
Degenerate likelihoods
The description in section 1 assumed that the likelihood function provides a total
ordering of elements of the parameter space. However, distinct elements dx and
dx′ could have the same likelihood, either because the parameters are discrete, or
because the likelihood is degenerate.
One way to break degeneracies is through a joint model with variables of interest θ and an independent variable m ∈ [0, 1]:
\[ P(\theta, m) = P(\theta) \times P(m) = \frac{1}{Z} L(\theta)\,\pi(\theta) \times \frac{1}{Z_m} L(m)\,\pi(m) \qquad (12) \]
where L(m) = 1 + ε(m − 0.5), π(m) = 1 and Z_m = 1. We choose ε such that log(1 + ε) is smaller than the smallest difference in log(L(θ)) allowed by machine precision.
Standard nested sampling is now possible. Assuming we have a likelihood constraint
Li , we need to be able to draw from
\[ P(\theta', m' \mid \theta, m, L_i) \propto \begin{cases} \pi(\theta')\,\pi(m') & L(\theta')L(m') > L_i, \\ 0 & \text{otherwise.} \end{cases} \qquad (13) \]
The additional variable can be ignored except for L(θ′) = L(θ_i); then only m′ > m are possible. Therefore, the probability of states with likelihood L(θ_i) is weighted by (1 − m′).
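In practice the construction reduces to a lexicographic comparison; a two-line sketch (ours):

# Pair each point with an auxiliary m ~ Uniform(0, 1) drawn once at creation.
# Python's tuple comparison is lexicographic, which realizes the strict total
# order built in equations (12)-(13): ties in L are broken by m.
def inside_constraint(logL_new, m_new, logL_cur, m_cur):
    return (logL_new, m_new) > (logL_cur, m_cur)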
References
[1] John Skilling. Nested sampling. In R. Fischer, R. Preuss, and U. von Toussaint, editors, Bayesian inference and maximum entropy methods in science and engineering, AIP Conference Proceedings 735, pages 395-405, 2004.
[2] Andrew Gelman and Xiao-Li Meng. Simulating normalizing constants: from importance sampling to bridge sampling to path sampling. Statist. Sci., 13(2):163-185, 1998.
[3] Matthew J. Beal and Zoubin Ghahramani. The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. Bayesian Statistics, 7:453-464, 2003.
[4] I. R. McDonald and K. Singer. Machine calculation of thermodynamic properties of a simple fluid at supercritical temperatures. J. Chem. Phys., 47(11):4766-4772, 1967.
[5] David J.C. MacKay. Information Theory, Inference, and Learning Algorithms. CUP, 2003. www.inference.phy.cam.ac.uk/mackay/itila/.
[6] Robert G. Edwards and Alan D. Sokal. Generalization of the Fortuin-Kasteleyn-Swendsen-Wang representation and Monte Carlo algorithm. Phys. Rev. D, 38(6), 1988.
[7] C. M. Fortuin and P. W. Kasteleyn. On the random-cluster model. I. Introduction and relation to other models. Physica, 57:536-564, 1972.
[8] R. H. Swendsen and J. S. Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58(2):86-88, January 1987.
[9] Radford M. Neal. Annealed importance sampling. Statistics and Computing, 11:125-139, 2001.
[10] Charles H. Bennett. Efficient estimation of free energy differences from Monte Carlo data. Journal of Computational Physics, 22(2):245-268, October 1976.
[11] Vivek K. Gore and Mark R. Jerrum. The Swendsen-Wang process does not always mix rapidly. In 29th ACM Symposium on Theory of Computing, pages 674-681, 1997.
[12] Bernd A. Berg and Thomas Neuhaus. Multicanonical ensemble: A new approach to simulate first-order phase transitions. Phys. Rev. Lett., 68(1):9-12, January 1992.
Recovery of Jointly Sparse Signals
from Few Random Projections
Michael B. Wakin
ECE Department
Rice University
[email protected]
Marco F. Duarte
ECE Department
Rice University
[email protected]
Dror Baron
ECE Department
Rice University
[email protected]
Shriram Sarvotham
ECE Department
Rice University
[email protected]
Richard G. Baraniuk
ECE Department
Rice University
[email protected]
Abstract
Compressed sensing is an emerging field based on the revelation that a small group
of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing
(DCS) that enables new distributed coding algorithms for multi-signal ensembles
that exploit both intra- and inter-signal correlation structures. The DCS theory rests
on a new concept that we term the joint sparsity of a signal ensemble. We study
three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically
and empirically the number of measurements per sensor required for accurate reconstruction. In some sense DCS is a framework for distributed compression of
sources with memory, which has remained a challenging problem in information
theory for some time. DCS is immediately applicable to a range of problems in
sensor networks and arrays.
1
Introduction
Distributed communication, sensing, and computing [13, 17] are emerging fields with numerous promising applications. In a typical setup, large groups of cheap and individually unreliable nodes may collaborate to perform a variety of data processing tasks such
as sensing, data collection, classification, modeling, tracking, and so on. As individual
nodes in such a network are often battery-operated, power consumption is a limiting factor, and the reduction of communication costs is crucial. In such a setting, distributed
source coding [8, 13, 14, 17] may allow the sensors to save on communication costs. In the
Slepian-Wolf framework for lossless distributed coding [8, 14], the availability of correlated side information at the decoder enables the source encoder to communicate losslessly
at the conditional entropy rate, rather than the individual entropy. Because sensor networks
and arrays rely on data that often exhibit strong spatial correlations [13, 17], distributed
compression can reduce the communication costs substantially, thus enhancing battery life.
Unfortunately, distributed compression schemes for sources with memory are not yet mature [8, 13, 14, 17].
We propose a new approach for distributed coding of correlated sources whose signal correlations take the form of a sparse structure. Our approach is based on another emerging
field known as compressed sensing (CS) [4, 9]. CS builds upon the groundbreaking work
of Candès et al. [4] and Donoho [9], who showed that signals that are sparse relative to a
known basis can be recovered from a small number of nonadaptive linear projections onto
a second basis that is incoherent with the first. (A random basis provides such incoherence
with high probability. Hence CS with random projections is universal: the signals can
be reconstructed if they are sparse relative to any known basis.) The implications of CS
for signal acquisition and compression are very promising. With no a priori knowledge of
a signal?s structure, a sensor node could simultaneously acquire and compress that signal,
preserving the critical information that is extracted only later at a fusion center.
In our framework for distributed compressed sensing (DCS), this advantage is particularly
compelling. In a typical DCS scenario, a number of sensors measure signals that are each
individually sparse in some basis and also correlated from sensor to sensor. Each sensor
independently encodes its signal by projecting it onto another, incoherent basis (such as a
random one) and then transmits just a few of the resulting coefficients to a single collection
point. Under the right conditions, a decoder at the collection point can reconstruct each of
the signals precisely. The DCS theory rests on a concept that we term the joint sparsity of a
signal ensemble. We study in detail three simple models for jointly sparse signals, propose
tractable algorithms for joint recovery of signal ensembles from incoherent projections, and
characterize theoretically and empirically the number of measurements per sensor required
for reconstruction. While the sensors operate entirely without collaboration, joint decoding
can recover signals using far fewer measurements per sensor than would be required for
separable CS recovery. This paper presents our specific results for one of the three models;
the other two are highlighted in our papers [1, 2, 11].
2 Sparse Signal Recovery from Incoherent Projections
In the traditional CS setting, we consider a single signal x ∈ R^N, which we assume to be
sparse in a known orthonormal basis or frame Ψ = [ψ1, ψ2, . . . , ψN]. That is, x = Ψθ
for some θ, where ‖θ‖0 = K holds.1 The signal x is observed indirectly via an M ×
N measurement matrix Φ, where M < N. We let y = Φx be the observation vector,
consisting of the M inner products of the measurement vectors against the signal. The M
rows of Φ are the measurement vectors, against which the signal is projected. These rows
are chosen to be incoherent with Ψ; that is, they each have non-sparse expansions in
the basis Ψ [4, 9]. In general, Φ meets the necessary criteria when its entries are drawn
randomly, for example independent and identically distributed (i.i.d.) Gaussian.
Although the equation y = Φx is underdetermined, it is possible to recover x from y under
certain conditions. In general, due to the incoherence between Φ and Ψ, θ can be recovered
by solving the ℓ0 optimization problem

θ̂ = arg min ‖θ‖0   s.t.   y = ΦΨθ.

In principle, remarkably few random measurements are required to recover a K-sparse
signal via ℓ0 minimization. Clearly, more than K measurements must be taken to avoid
ambiguity; in theory, K + 1 random measurements will suffice [2]. Unfortunately, solving
this ℓ0 optimization problem appears to be NP-hard [6], requiring a combinatorial enumeration of the (N choose K) possible sparse subspaces for θ.
The amazing revelation that supports the CS theory is that a much simpler problem yields
an equivalent solution (thanks again to the incoherence of the bases): we need only solve
for the ℓ1-sparsest vector θ that agrees with the observed coefficients y [4, 9]

θ̂ = arg min ‖θ‖1   s.t.   y = ΦΨθ.

1 The ℓ0 "norm" ‖θ‖0 merely counts the number of nonzero entries in the vector θ. CS theory also
applies to signals for which ‖θ‖p ≤ K, where 0 < p ≤ 1; such extensions for DCS are a topic of
ongoing research.
This optimization problem, known also as Basis Pursuit (BP) [7], is significantly more
tractable and can be solved with traditional linear programming techniques. There is no
free lunch, however; more than K + 1 measurements will be required in order to recover
sparse signals. In general, there exists a constant oversampling factor c = c(K, N) such
that cK measurements suffice to recover x with very high probability [4, 9]. Commonly
quoted as c = O(log(N)), we have found that c ≈ log2(1 + N/K) provides a useful
rule-of-thumb [2]. At the expense of slightly more measurements, greedy algorithms have
also been developed to recover x from y. One example, known as Orthogonal Matching
Pursuit (OMP) [15], requires c ≈ 2 ln(N). We exploit both BP and greedy algorithms for
recovering jointly sparse signals.
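To make the single-signal pipeline concrete, here is a minimal sketch (ours, not the authors' code; Python with NumPy and SciPy assumed available) that draws a K-sparse signal, takes roughly cK Gaussian measurements using the rule-of-thumb above, and solves Basis Pursuit as a linear program via the standard split θ = u − v with u, v ≥ 0:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K = 50, 5
M = int(np.ceil(K * np.log2(1 + N / K))) + 5   # rule-of-thumb oversampling

Psi = np.eye(N)                        # sparsity basis (identity for simplicity)
theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N))      # i.i.d. Gaussian measurement matrix
y = Phi @ Psi @ theta                  # M incoherent measurements

A = Phi @ Psi
A_eq = np.hstack([A, -A])              # y = A(u - v), u, v >= 0
c = np.ones(2 * N)                     # minimize sum(u) + sum(v) = ||theta||_1
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
theta_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(theta_hat - theta))

The nonnegative split is the textbook reduction of an ℓ1 objective to LP form; any LP solver could stand in for linprog.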
3 Joint Sparsity Models
In this section, we generalize the notion of a signal being sparse in some basis to the
notion of an ensemble of signals being jointly sparse. We consider three different joint
sparsity models (JSMs) that apply in different situations. In most cases, each signal is itself
sparse, and so we could use the CS framework from above to encode and decode each one
separately. However, there also exists a framework wherein a joint representation for the
ensemble uses fewer total vectors.
We use the following notation for our signal ensembles and measurement model. Denote
the signals in the ensemble by xj, j ∈ {1, 2, . . . , J}, and assume that each signal xj ∈ R^N.
We assume that there exists a known sparse basis Ψ for R^N in which the xj can be sparsely
represented. Denote by Φj the measurement matrix for signal j; Φj is Mj × N and, in
general, the entries of Φj are different for each j. Thus, yj = Φj xj consists of Mj < N
incoherent measurements of xj.
JSM-1: Sparse common component + innovations. In this model, all signals share a
common sparse component while each individual signal contains a sparse innovation component; that is,
xj = zC + zj,   j ∈ {1, 2, . . . , J},
with
zC = ΨθC,  ‖θC‖0 = K
and
zj = Ψθj,  ‖θj‖0 = Kj.
Thus, the signal zC is common to all of the xj and has sparsity K in basis Ψ. The signals
zj are the unique portions of the xj and have sparsity Kj in the same basis. A practical
situation well-modeled by JSM-1 is a group of sensors measuring temperatures at a number
of outdoor locations throughout the day. The temperature readings xj have both temporal
(intra-signal) and spatial (inter-signal) correlations. Global factors, such as the sun and
prevailing winds, could have an effect zC that is both common to all sensors and structured
enough to permit sparse representation. More local factors, such as shade, water, or animals, could contribute localized innovations zj that are also structured (and hence sparse).
Similar scenarios could be imagined for a network of sensors recording other phenomena
that change smoothly in time and in space and thus are highly correlated.
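A hypothetical generator for such a JSM-1 ensemble (a sketch of ours, with Ψ = I so that sparsity is in the canonical basis and all nonzero coefficients are i.i.d. Gaussian):

import numpy as np

def jsm1_ensemble(N=1000, J=2, K=200, Kj=50, rng=None):
    rng = rng or np.random.default_rng(0)
    theta_C = np.zeros(N)                 # common component z_C = Psi theta_C
    theta_C[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    signals = []
    for _ in range(J):
        theta_j = np.zeros(N)             # innovation z_j with ||theta_j||_0 = Kj
        theta_j[rng.choice(N, Kj, replace=False)] = rng.standard_normal(Kj)
        signals.append(theta_C + theta_j)  # x_j = z_C + z_j (Psi = I)
    return theta_C, signals

zC, xs = jsm1_ensemble()
print(len(xs), xs[0].shape)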
JSM-2: Common sparse supports. In this model, all signals are constructed from the
same sparse set of basis vectors, but with different coefficients; that is,
xj = Ψθj,   j ∈ {1, 2, . . . , J},
where each θj is supported only on the same Ω ⊂ {1, 2, . . . , N} with |Ω| = K. Hence,
all signals have ℓ0 sparsity of K, and all are constructed from the same K basis elements,
but with arbitrarily different coefficients. A practical situation well-modeled by JSM-2
is where multiple sensors acquire the same signal but with phase shifts and attenuations
caused by signal propagation. In many cases it is critical to recover each one of the sensed
signals, such as in many acoustic localization and array processing algorithms. Another
useful application for JSM-2 is MIMO communication [16].
JSM-3: Nonsparse common + sparse innovations. This model extends JSM-1 so that
the common component need no longer be sparse in any basis; that is,
xj = zC + zj,   j ∈ {1, 2, . . . , J},
with
zC = ΨθC
and
zj = Ψθj,  ‖θj‖0 = Kj,
but zC is not necessarily sparse in the basis Ψ. We also consider the case where the supports
of the innovations are shared for all signals, which extends JSM-2. A practical situation
well-modeled by JSM-3 is where several sources are recorded by different sensors together
with a background signal that is not sparse in any basis. Consider, for example, a computer
vision-based verification system in a device production plant. Cameras acquire snapshots
of components in the production line; a computer system then checks for failures in the
devices for quality control purposes. While each image could be extremely complicated,
the ensemble of images will be highly correlated, since each camera is observing the same
device with minor (sparse) variations. JSM-3 could also be useful in some non-distributed
scenarios. For example, it motivates the compression of data such as video, where the
innovations or differences between video frames may be sparse, even though a single frame
may not be very sparse. In general, JSM-3 may be invoked for ensembles with significant
inter-signal correlations but insignificant intra-signal correlations.
4 Recovery of Jointly Sparse Signals
In a setting where a network or array of sensors may encounter a collection of jointly
sparse signals, and where a centralized reconstruction algorithm is feasible, the number
of incoherent measurements required by each sensor can be reduced. For each JSM, we
propose algorithms for joint signal recovery from incoherent projections and characterize
theoretically and empirically the number of measurements per sensor required for accurate
reconstruction. We focus in particular on JSM-3 in this paper but also overview our results
for JSMs 1 and 2, which are discussed in further detail in our papers [1, 2, 11].
4.1 JSM-1: Sparse common component + innovations
For this model (see also [1, 2]), we have proposed an analytical framework inspired by the
principles of information theory. This allows us to characterize the measurement rates Mj
required to jointly reconstruct the signals xj . The measurement rates relate directly to the
signals? conditional sparsities, in parallel with the Slepian-Wolf theory. More specifically,
we have formalized the following intuition. Consider the simple case of J = 2 signals. By
employing the CS machinery, we might expect that (i) (K + K1)c coefficients suffice to
reconstruct x1, (ii) (K + K2)c coefficients suffice to reconstruct x2, yet only (iii) (K + K1 +
K2)c coefficients should suffice to reconstruct both x1 and x2, since we have K + K1 + K2
nonzero elements in x1 and x2 . In addition, given the (K + K1 )c measurements for x1
as side information, and assuming that the partitioning of x1 into zC and z1 is known,
cK2 measurements that describe z2 should allow reconstruction of x2 . Formalizing these
arguments allows us to establish theoretical lower bounds on the required measurement
rates at each sensor; Fig. 1(a) shows such a bound for the case of J = 2 signals.
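As a quick arithmetic illustration of (i)-(iii) (ours, using the rule-of-thumb c ≈ log2(1 + N/K) from Section 2; the numbers are purely illustrative):

import numpy as np

N, K, K1, K2 = 1000, 200, 50, 50
c = np.log2(1 + N / K)                    # rule-of-thumb oversampling factor
separate = c * (K + K1) + c * (K + K2)    # (i) + (ii): recover x1 and x2 alone
joint = c * (K + K1 + K2)                 # (iii): recover them jointly
print(f"separate: {separate:.0f} vs joint: {joint:.0f} measurements")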
We have also established upper bounds on the required measurement rates Mj by proposing
a specific algorithm for reconstruction [1]. The algorithm uses carefully designed measurement matrices Φj (in which some rows are identical and some differ) so that the resulting
measurements can be combined to allow step-by-step recovery of the sparse components.
The theoretical rates Mj are below those required for separable CS recovery of each signal
xj (see Fig. 1(a)). We also proposed a reconstruction technique based on a single execution of a linear program, which seeks the sparsest components [zC; z1; . . . ; zJ] that
account for the observed measurements. Numerical simulations support such an approach
(see Fig. 1(a)). Future work will extend JSM-1 to ℓp-compressible signals, 0 < p ≤ 1.

Figure 1: (a) Converse bounds and achievable measurement rates (R1, R2) for J = 2 signals with common
sparse component and sparse innovations (JSM-1); the plotted curves are labeled Converse, Anticipated,
Achievable, Simulation, and Separate. We fix signal lengths N = 1000 and sparsities
K = 200, K1 = K2 = 50. The measurement rates Rj := Mj/N reflect the number of measurements normalized by the signal length. Blue curves indicate our theoretical and anticipated converse
bounds; red indicates a provably achievable region, and pink denotes the rates required for separable
CS signal reconstruction. (b) Reconstructing a signal ensemble with common sparse supports (JSM-2).
We plot the probability of perfect reconstruction via DCS-SOMP (solid lines) and independent
CS reconstruction (dashed lines) as a function of the number of measurements per signal M and the
number of signals J ∈ {2, 4, 8, 16, 32}. We fix the signal length to N = 50 and the sparsity to K = 5. An oracle
encoder that knows the positions of the large coefficients would use 5 measurements per signal.
4.2 JSM-2: Common sparse supports
Under the JSM-2 signal ensemble model (see also [2, 11]), independent recovery of each
signal via ℓ1 minimization would require cK measurements per signal. However, algorithms inspired by conventional greedy pursuit algorithms (such as OMP [15]) can substantially reduce this number. In the single-signal case, OMP iteratively constructs the
sparse support set Ω; decisions are based on inner products between the columns of ΦΨ
and a residual. In the multi-signal case, there are more clues available for determining the
elements of Ω.
To establish a theoretical justification for our approach, we first proposed a simple One-Step Greedy Algorithm (OSGA) [11] that combines all of the measurements and seeks the
largest correlations with the columns of the Φj Ψ. We established that, assuming that Φj
has i.i.d. Gaussian entries and that the nonzero coefficients in the θj are i.i.d. Gaussian, then
with M ≥ 1 measurements per signal, OSGA recovers Ω with probability approaching 1
as J → ∞. Moreover, with M ≥ K measurements per signal, OSGA recovers all xj with
probability approaching 1 as J → ∞. This meets the theoretical lower bound for Mj.
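A minimal sketch of OSGA (our reading of the description above, with Ψ = I so correlations are taken directly against the columns of each Φj):

import numpy as np

def osga(Phis, ys, K):
    # Phis: list of (M_j x N) measurement matrices; ys: matching measurements
    scores = np.zeros(Phis[0].shape[1])
    for Phi, y in zip(Phis, ys):
        scores += (Phi.T @ y) ** 2        # squared correlations, pooled over j
    return np.sort(np.argsort(scores)[-K:])  # indices of the K largest scores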
In practice, OSGA can be improved using an iterative greedy algorithm. We proposed a
simple variant of Simultaneous Orthogonal Matching Pursuit (SOMP) [16] that we term
DCS-SOMP [11]. For this algorithm, Fig. 1(b) plots the performance as the number of
sensors varies from J = 1 to 32. We fix the signal lengths at N = 50 and the sparsity of
each signal to K = 5. With DCS-SOMP, for perfect reconstruction of all signals the average number of measurements per signal decreases as a function of J. The trend suggests
that, for very large J, close to K measurements per signal should suffice. On the contrary,
with independent CS reconstruction, for perfect reconstruction of all signals the number of
measurements per sensor increases as a function of J. This surprise is due to the fact that
each signal will experience an independent probability p < 1 of successful reconstruction;
therefore the overall probability of complete success is p^J. Consequently, each sensor must
compensate by making additional measurements.
4.3 JSM-3: Nonsparse common + sparse innovations
The JSM-3 signal ensemble model provides a particularly compelling motivation for joint
recovery. Under this model, no individual signal xj is sparse, and so separate signal recovery would require fully N measurements per signal. As in the other JSMs, however, the
commonality among the signals makes it possible to substantially reduce this number.
Our recovery algorithms are based on the observation that if the common component zC
were known, then each innovation zj could be estimated using the standard single-signal
CS machinery on the adjusted measurements yj − Φj zC = Φj zj. While zC is not known in
advance, it can be estimated from the measurements. In fact, across all J sensors, a total of
Σj Mj random projections of zC are observed (each corrupted by a contribution from one
of the zj). Since zC is not sparse, it cannot be recovered via CS techniques, but when the
number of measurements is sufficiently large (Σj Mj ≫ N), zC can be estimated using
standard tools from linear algebra. A key requirement for such a method to succeed in
recovering zC is that each Φj be different, so that their rows combine to span all of R^N. In
the limit, zC can be recovered while still allowing each sensor to operate at the minimum
measurement rate dictated by the {zj}. A prototype algorithm, which we name Transpose
Estimation of Common Component (TECC), is listed below, where we assume that each
measurement matrix Φj has i.i.d. N(0, σj²) entries.
TECC Algorithm for JSM-3
1. Estimate common component: Define the matrix Φ̂ as the concatenation of the regularized individual measurement matrices Φ̂j = (1/(Mj σj²)) Φj, that is, Φ̂ = [Φ̂1, Φ̂2, . . . , Φ̂J].
Calculate the estimate of the common component as ẑC = (1/J) Φ̂^T y.
2. Estimate measurements generated by innovations: Using the previous estimate, subtract the contribution of the common part on the measurements and generate estimates
for the measurements caused by the innovations for each signal: ŷj = yj − Φj ẑC.
3. Reconstruct innovations: Using a standard single-signal CS reconstruction algorithm,
obtain estimates of the innovations ẑj from the estimated innovation measurements ŷj.
4. Obtain signal estimates: Sum the above estimates, letting x̂j = ẑC + ẑj.
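In code, TECC might look as follows (our sketch, not the authors' implementation; we take Ψ = I and use plain OMP as the generic single-signal CS solver of Step 3):

import numpy as np

def omp(A, y, K):
    # orthogonal matching pursuit: y ~ A theta with at most K nonzeros
    residual, idx = y.copy(), []
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    theta = np.zeros(A.shape[1])
    theta[idx] = coef
    return theta

def tecc(Phis, ys, sigmas, Ks):
    J = len(Phis)
    # Step 1: regularized-transpose estimate of the common component;
    # E[Phi_j^T Phi_j / (M_j sigma_j^2)] = I for i.i.d. N(0, sigma_j^2) entries
    zC = sum(Phi.T @ y / (Phi.shape[0] * s**2)
             for Phi, y, s in zip(Phis, ys, sigmas)) / J
    estimates = []
    for Phi, y, K in zip(Phis, ys, Ks):
        yb = y - Phi @ zC           # Step 2: innovation measurements
        zj = omp(Phi, yb, K)        # Step 3: single-signal CS reconstruction
        estimates.append(zC + zj)   # Step 4: signal estimate
    return estimates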
The following theorem shows that asymptotically, by using the TECC algorithm, each
sensor need only measure at the rate dictated by the sparsity Kj .
Theorem 1 [2] Assume that the nonzero expansion coefficients of the sparse innovations
zj are i.i.d. Gaussian random variables and that their locations are uniformly distributed
on {1, 2, ..., N }. Then the following statements hold:
1. Let the measurement matrices Φj contain i.i.d. N(0, σj²) entries with Mj ≥ Kj +
1. Then each signal xj can be recovered using the TECC algorithm with probability
approaching 1 as J → ∞.
2. Let Φj be a measurement matrix with Mj ≤ Kj for some j ∈ {1, 2, ..., J}. Then with
probability 1, the signal xj cannot be uniquely recovered by any algorithm for any J.
For large J, the measurement rates permitted by Statement 1 are the lowest possible for any
reconstruction strategy on JSM-3 signals, even neglecting the presence of the nonsparse
component. Thus, Theorem 1 provides a tight achievable and converse for JSM-3 signals.
The CS technique employed in Theorem 1 involves combinatorial searches for estimating
the innovation components. More efficient techniques could also be employed (including
several proposed for CS in the presence of noise [3, 5, 7, 10, 12]).
While Theorem 1 suggests the theoretical gains from joint recovery as J → ∞, practical
gains can also be realized with a moderate number of sensors. For example, suppose in
the TECC algorithm that the initial estimate ẑC is not accurate enough to enable correct
identification of the sparse innovation supports {Ωj}. In such a case, it may still be possible
for a rough approximation of the innovations {zj} to help refine the estimate ẑC. This in
turn could help to refine the estimates of the innovations. Since each component helps to
estimate the others, we propose an iterative algorithm for JSM-3 recovery. The Alternating
Common and Innovation Estimation (ACIE) algorithm exploits the observation that once
the basis vectors comprising the innovation zj have been identified in the index set Ωj,
their effect on the measurements yj can be removed to aid in estimating zC.
ACIE Algorithm for JSM-3
1. Initialize: Set Ω̂j = ∅ for each j. Set the iteration counter ℓ = 1.
2. Estimate common component: Let Φj,Ω̂j be the Mj × |Ω̂j| submatrix obtained
by sampling the columns Ω̂j from Φj, and construct an Mj × (Mj − |Ω̂j|) matrix
Qj = [qj,1 . . . qj,Mj−|Ω̂j|] having orthonormal columns that span the orthogonal complement of colspan(Φj,Ω̂j). Remove the projection of the measurements onto the aforementioned span to obtain measurements caused exclusively by vectors not in Ω̂j, letting
ỹj = Qj^T yj and Φ̃j = Qj^T Φj. Use the modified measurements Ỹ = [ỹ1^T ỹ2^T . . . ỹJ^T]^T
and modified holographic basis Φ̃ = [Φ̃1^T Φ̃2^T . . . Φ̃J^T]^T to refine the estimate of the
measurements caused by the common part of the signal, setting z̃C = Φ̃† Ỹ, where
A† = (A^T A)^(−1) A^T denotes the pseudoinverse of matrix A.
3. Estimate innovation supports: For each signal j, subtract z̃C from the measurements,
ŷj = yj − Φj z̃C, and estimate the sparse support Ω̂j of each innovation.
4. Iterate: If ℓ < L, a preset number of iterations, then increment ℓ and return to Step 2.
Otherwise proceed to Step 5.
5. Estimate innovation coefficients: For each signal j, estimate the coefficients for the
indices in Ω̂j, setting θ̂j,Ω̂j = Φj,Ω̂j^† (yj − Φj z̃C), where θ̂j,Ω̂j is a sampled version of
the innovation's sparse coefficient vector estimate θ̂j.
6. Reconstruct signals: Estimate each signal as x̂j = z̃C + ẑj = z̃C + Ψθ̂j.
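A compact sketch of the ACIE loop (ours, under simplifying assumptions: Ψ = I, the Step 3 support estimate is just the Kj largest residual correlations rather than a full CS solve, and Σj Mj is assumed comfortably larger than N so the least-squares step is determined):

import numpy as np

def acie(Phis, ys, Ks, n_iter=5):
    N = Phis[0].shape[1]
    omegas = [np.array([], dtype=int) for _ in Phis]
    for _ in range(n_iter):
        rows, rhs = [], []            # Step 2: build the modified system for z_C
        for Phi, y, om in zip(Phis, ys, omegas):
            if om.size:
                Qf, _ = np.linalg.qr(Phi[:, om], mode="complete")
                Q = Qf[:, om.size:]   # orthonormal complement of colspan
            else:
                Q = np.eye(len(y))
            rows.append(Q.T @ Phi)
            rhs.append(Q.T @ y)
        zC, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
        omegas = [np.argsort(np.abs(Phi.T @ (y - Phi @ zC)))[-K:]   # Step 3
                  for Phi, y, K in zip(Phis, ys, Ks)]
    signals = []                      # Steps 5-6: coefficients and signals
    for Phi, y, om in zip(Phis, ys, omegas):
        th, *_ = np.linalg.lstsq(Phi[:, om], y - Phi @ zC, rcond=None)
        zj = np.zeros(N)
        zj[om] = th
        signals.append(zC + zj)
    return signals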
In the case where the innovation support estimate is correct (Ω̂j = Ωj), the measurements
ỹj will describe only the common component zC. If this is true for every signal j and the
number of remaining measurements satisfies Σj (Mj − Kj) ≥ N, then zC can be perfectly recovered
in Step 2. Because it may be difficult to correctly obtain all Ωj in the first iteration, we find
it preferable to run the algorithm for several iterations.
Fig. 2(a) shows that, for sufficiently large J, we can recover all of the signals with significantly fewer than N measurements per signal. We note the following behavior in the graph.
First, as J grows, it becomes more difficult to perfectly reconstruct all J signals. We believe this is inevitable, because even if zC were known without error, then perfect ensemble
recovery would require the successful execution of J independent runs of OMP. Second,
for small J, the probability of success can decrease at high values of M . We believe this is
due to the fact that initial errors in estimating zC may tend to be somewhat sparse (since ẑC
roughly becomes an average of the signals {xj}), and these sparse errors can mislead the
subsequent OMP processes. For more moderate M , it seems that the errors in estimating
zC (though greater) tend to be less sparse. We expect that a more sophisticated algorithm
could alleviate such a problem, and we note that the problem is also mitigated at higher J.
Fig. 2(b) shows that when the sparse innovations share common supports we see an even
greater savings. As a point of reference, a traditional approach to signal encoding would
require 1600 total measurements to reconstruct these J = 32 nonsparse signals of length
N = 50. Our approach requires only about 10 per sensor for a total of 320 measurements.
Figure 2: Reconstructing a signal ensemble with nonsparse common component and sparse innovations (JSM-3) using ACIE; both panels plot the probability of exact reconstruction against the
number of measurements per signal M, for J = 8, 16, and 32 signals. (a) Reconstruction using OMP independently on each signal in Step 3
of the ACIE algorithm (innovations have arbitrary supports). (b) Reconstruction using DCS-SOMP
jointly on all signals in Step 3 of the ACIE algorithm (innovations have identical supports). Signal
length N = 50, sparsity K = 5. The common structure exploited by DCS-SOMP enables dramatic
savings in the number of measurements. We average over 1000 simulation runs.
Acknowledgments: Thanks to Emmanuel Candès, Hyeokho Choi, and Joel Tropp for informative and inspiring conversations.
References
[1] D. Baron, M. F. Duarte, S. Sarvotham, M. B. Wakin, and R. G. Baraniuk. An information-theoretic approach to distributed compressed sensing. In Allerton Conf. Comm., Control, Comput., Sept. 2005.
[2] D. Baron, M. B. Wakin, M. F. Duarte, S. Sarvotham, and R. G. Baraniuk. Distributed compressed sensing. 2005. Preprint. Available at www.dsp.rice.edu/cs.
[3] E. Cand`es, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate
measurements. Comm. Pure Applied Mathematics, 2005. To appear.
[4] E. Cand`es and T. Tao. Near optimal signal recovery from random projections and universal
encoding strategies. 2004. Preprint.
[5] E. Cand`es and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than
n. 2005. Preprint.
[6] E. Cand`es and T. Tao. Error correction via linear programming. 2005. Preprint.
[7] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal
on Scientific Computing, 20(1):33–61, 1998.
[8] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[9] D. Donoho. Compressed sensing. 2004. Preprint.
[10] D. Donoho and Y. Tsaig. Extensions of compressed sensing. 2004. Preprint.
[11] M. F. Duarte, S. Sarvotham, D. Baron, M. B. Wakin, and R. G. Baraniuk. Distributed compressed sensing of jointly sparse signals. In Asilomar Conf. Signals, Sys., Comput., Nov. 2005.
[12] J. Haupt and R. Nowak. Signal reconstruction from noisy random projections. 2005. Preprint.
[13] S. Pradhan and K. Ramchandran. Distributed source coding using syndromes (DISCUS): Design and construction. IEEE Trans. Inform. Theory, 49:626–643, March 2003.
[14] D. Slepian and J. K. Wolf. Noiseless coding of correlated information sources. IEEE Trans.
Inform. Theory, 19:471–480, July 1973.
[15] J. Tropp and A. C. Gilbert. Signal recovery from partial information via orthogonal matching
pursuit. 2005. Preprint.
[16] J. Tropp, A. C. Gilbert, and M. J. Strauss. Simulataneous sparse approximation via greedy
pursuit. In IEEE 2005 Int. Conf. Acoustics, Speech, Signal Processing, March 2005.
[17] Z. Xiong, A. Liveris, and S. Cheng. Distributed source coding for sensor networks. IEEE Signal
Proc. Mag., 21:80–94, September 2004.
From Batch to Transductive Online Learning
Sham Kakade
Toyota Technological Institute
Chicago, IL 60637
[email protected]
Adam Tauman Kalai
Toyota Technological Institute
Chicago, IL 60637
[email protected]
Abstract
It is well-known that everything that is learnable in the difficult online
setting, where an arbitrary sequence of examples must be labeled one at
a time, is also learnable in the batch setting, where examples are drawn
independently from a distribution. We show a result in the opposite direction. We give an efficient conversion algorithm from batch to online
that is transductive: it uses future unlabeled data. This demonstrates the
equivalence between what is properly and efficiently learnable in a batch
model and a transductive online model.
1 Introduction
There are many striking similarities between results in the standard batch learning setting,
where labeled examples are assumed to be drawn independently from some distribution,
and the more difficult online setting, where labeled examples arrive in an arbitrary sequence. Moreover, there are simple procedures that convert any online learning algorithm
to an equally good batch learning algorithm [8]. This paper gives a procedure going in the
opposite direction.
It is well-known that the online setting is strictly harder than the batch setting, even for
the simple one-dimensional class of threshold functions on the interval [0, 1]. Hence, we
consider the online transductive model of Ben-David, Kushilevitz, and Mansour [2]. In
this model, an arbitrary but unknown sequence of n examples (x1, y1), . . . , (xn, yn) ∈
X × {−1, 1} is fixed in advance, for some instance space X. The set of unlabeled examples
is then presented to the learner, Σ = {xi | 1 ≤ i ≤ n}. The examples are then revealed, in an
online manner, to the learner, for i = 1, 2, . . . , n. The learner observes example xi (along
with all previous labeled examples (x1, y1), . . . , (xi−1, yi−1) and the unlabeled example
set Σ) and must predict yi. The true label yi is then revealed to the learner. After this
occurs, the learner compares its number of mistakes to the minimum number of mistakes of
any of a target class F of functions f : X → {−1, 1} (such as linear threshold functions).
Note that our results are in this type of agnostic model [7], where we allow for arbitrary
labels, unlike the realizable setting, i.e., noiseless or PAC models, where it is assumed that
the labels are consistent with some f ∈ F.
With this simple transductive knowledge of what unlabeled examples are to come, one
can use existing expert algorithms to inefficiently learn any class of finite VC dimension,
similar to the batch setting. How does one use unlabeled examples efficiently to guarantee
good online performance?
Our efficient algorithm A2 converts a proper1 batch algorithm to a proper online algorithm
(both in the agnostic setting). At any point in time, it has observed some labeled examples.
It then "hallucinates" random examples by taking some number of unlabeled examples and
labeling them randomly. It appends these examples to those observed so far and predicts
according to the batch algorithm that finds the hypothesis of minimum empirical error on
the combined data.
The idea of "hallucinating" and optimizing has been used for designing efficient online
algorithms [6, 5, 1, 10, 4] in situations where exponential weighting schemes were inefficient. The hallucination analogy was suggested by Blum and Hartline [4]. In the context
of transductive learning, it seems to be a natural way to try to use the unlabeled examples
in conjunction with a batch learner. Let #mistakes(f, σn) denote the number of mistakes
of a function f ∈ F on a particular sequence σn ∈ (X × {−1, 1})^n, and #mistakes(A, σn)
denote the same quantity for a transductive online learning algorithm A. Our main theorem
is the following.
Theorem 1. Let F be a class of functions f : X → {−1, 1} of VC dimension d. There
is an efficient randomized transductive online algorithm that, for any n > 1 and σn ∈ (X × {−1, 1})^n,

E[#mistakes(A2, σn)] ≤ minf∈F #mistakes(f, σn) + 2.5 n^(3/4) √(d log n).
The algorithm is computationally efficient in the sense that it runs in time poly(n), given
an efficient proper batch learning algorithm.
One should note that the bound on the error rate is the same as that of the best f ∈ F plus
O(n^(−1/4) √(d log(n))), approaching 0 at a rate related to the standard VC bound.
It is well-known that, without regard to computational efficiency, the learnable classes of
functions are exactly those with finite VC dimension. Consequently, the classes of functions learnable in the batch and transductive online settings are the same. The classes of
functions properly learnable by computationally efficient algorithms in the proper batch
and transductive online settings are identical, as well.
In addition to the new algorithm, this is interesting because it helps justify a long line of
work suggesting that whatever can be done in a batch setting can also be done online.
Our result is surprising in light of earlier work by Blum showing that a slightly different
online model is harder than its batch analog for computational reasons and not information-theoretic reasons [3].
In Section 2, we define the transductive online model. In Section 3, we analyze the easier
case of data that is realizable with respect to some function class, i.e., when there is some
function of zero error in the class. In Section 4, we present and analyze the hallucination
algorithm. In Section 5, we discuss open problems such as extending the results to improper
learning and the efficient realizable case.
2 Models and definitions
The transductive online model considered by Ben-David, Kushilevitz, and Mansour [2]
consists of an instance space X and label set Y, which we will always take to be binary, Y = {−1, 1}. An arbitrary n > 0 and arbitrary sequence of labeled examples
(x1 , y1 ), . . . , (xn , yn ) is fixed. One can think of these as being chosen by an adversary
who knows the (possibly randomized) learning algorithm but not the realization of its random coin flips. For notational convenience, we define σi to be the subsequence of the first i
labeled examples,

σi = (x1, y1), (x2, y2), . . . , (xi, yi),

and Σ to be the set of all unlabeled examples in σn,

Σ = {xi | i ∈ {1, 2, . . . , n}}.

1 A proper learning algorithm is one that always outputs a hypothesis h ∈ F.
A transductive online learner A is a function that takes as input n (the number of examples
to be predicted), Σ ⊆ X (the set of unlabeled examples, |Σ| ≤ n), xi ∈ Σ (the example
to be tested), and σi−1 ∈ (Σ × Y)^(i−1) (the previous i − 1 labeled examples) and outputs a
prediction ŷ ∈ Y of yi, for any 1 ≤ i ≤ n. The number of mistakes of A on the sequence
σn = (x1, y1), . . . , (xn, yn) is,

#mistakes(A, σn) = |{i | A(n, Σ, xi, σi−1) ≠ yi}|.
If A is computed by a randomized algorithm, then we similarly define E[#mistakes(A, σn)],
where the expectation is taken over the random coin flips of A. In order to speak of the
learnability of a set F of functions f : X → Y, we define

#mistakes(f, σn) = |{i | f(xi) ≠ yi}|.
Formally, paralleling agnostic learning [7],2 we define an efficient transductive online
learner A for class F to be one for which the learning algorithm runs in time poly(n)
and achieves, for any ε > 0,

E[#mistakes(A, σn)] ≤ minf∈F #mistakes(f, σn) + εn,

for n = poly(1/ε).3
2.1 Proper learning
Proper batch learning requires one to output a hypothesis h ∈ F. An efficient proper
batch learning algorithm for F is a batch learning algorithm B that, given any ε > 0, with
n = poly(1/ε) many examples from any distribution D, outputs an h ∈ F of expected
error E[PrD[h(x) ≠ y]] ≤ minf∈F PrD[f(x) ≠ y] + ε and runs in time poly(n).
Observation 1. Any efficient proper batch learning algorithm B can be converted into an
efficient empirical error minimizer M that, for any n, given any data set σn ∈ (X × Y)^n,
outputs an f ∈ F of minimal empirical error on σn.
Proof. Running B only on σn, B is not guaranteed to output a hypothesis of minimum
empirical error. Instead, we set an error tolerance of B to ε = 1/(4n), and give it examples
drawn uniformly from the distribution D which is uniform over the data σn (a type of
bootstrap). If B indeed returns a hypothesis h of error less than 1/n more than the best
f ∈ F, it must be a hypothesis of minimum empirical error on σn. By Markov's inequality,
with probability at most 1/4, the generalization error is more than 1/n. By repeating
several times and taking the best hypothesis, we get a success probability exponentially close
to 1. The runtime is polynomial in n.
To define proper learning in an online setting, it is helpful to think of the following alternative definition of transductive online learning. In this variation, the learner must output
a sequence of hypotheses h1, h2, . . . , hn : X → {−1, 1}. After the ith hypothesis hi is
output, the example (xi, yi) is revealed, and it is clear whether the learner made an error.
Formally, the (possibly randomized) algorithm A′ still takes as input n, Σ, and σi−1 (but
no longer xi), and outputs hi : X → {−1, 1}, and errs if hi(xi) ≠ yi. To see that this
model is equivalent to the previous definition, note that any algorithm A′ that outputs hypotheses hi can be used to make predictions hi(xi) on example i (it errs if hi(xi) ≠ yi).
It is equally true but less obvious that any algorithm A in the previous model can be converted to an algorithm A′ in this model. This is because A′ can be viewed as outputting
hi : X → {−1, 1}, where the function hi is defined by setting hi(x) equal to the prediction of algorithm A on the sequence σi−1 followed by the example x, for each x ∈ X, i.e.,
hi(x) = A(n, Σ, x, σi−1). (The same coins can be used if A and A′ are randomized.) A
(possibly randomized) transductive online algorithm in this model is defined to be proper
for a family of functions F if it always outputs hi ∈ F.

2 It is more common in online learning to bound the total number of mistakes of an online algorithm on an arbitrary sequence. We bound its error rate, as is usual for batch learning.
3 The results in this paper could be replaced by high-probability 1 − δ bounds at a cost of log 1/δ.
3 Warmup: the realizable case
In this section, we consider the realizable special case in which there is some f ∈ F which
correctly labels all examples. In particular, this means that we only consider sequences
σn for which there is an f ∈ F with #mistakes(f, σn) = 0. This case will be helpful to
analyze first as it is easier.
Fix arbitrary n > 0 and Σ = {x1, x2, . . . , xn} ⊆ X, |Σ| ≤ n. Say there are at most L
different ways to label the examples in Σ according to functions f ∈ F, so 1 ≤ L ≤ 2^(|Σ|).
In the transductive online model, L is determined by Σ and F only. Hence, as long as
prediction occurs only on examples x ∈ Σ, there are effectively only L different functions
in F that matter, and we can thus pick L such functions that give rise to the L different
labelings. On the ith example, one could simply take a majority vote of fj(xi) over consistent
labelings fj (the so-called halving algorithm), and this would easily ensure at most log2 (L)
mistakes, because each mistake eliminates at least half of the consistent labelings. One can
also use the following proper learning algorithm.
Proper transductive online learning algorithm in the realizable case:
• Preprocessing: Given the set of unlabeled examples Σ, take L functions f1, f2, . . . , fL ∈ F that give rise to the L different labelings
of x ∈ Σ.4
• ith prediction: Output a uniformly random function f from the fj
consistent with σi−1.
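A toy instantiation of this scheme (ours) for one-dimensional threshold functions, whose L ≤ n + 1 labelings of Σ are easy to enumerate explicitly:

import numpy as np

rng = np.random.default_rng(0)
xs = np.sort(rng.random(20))                  # the unlabeled set Sigma
target = lambda x: 1 if x > 0.42 else -1      # some f in F (realizable case)
# all distinct labelings induced by thresholds: f_t(x) = sign(x - t)
labelings = [np.where(xs > t, 1, -1)
             for t in np.concatenate(([-np.inf], xs))]

mistakes, seen = 0, []
for i, x in enumerate(xs):
    consistent = [lab for lab in labelings
                  if all(lab[j] == y for j, y in seen)]
    f = consistent[rng.integers(len(consistent))]  # uniform consistent labeling
    mistakes += (f[i] != target(x))
    seen.append((i, target(x)))
print("mistakes:", mistakes)   # expected to be O(log L), per the theorem below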
The above algorithm, while possibly very inefficient, is easy to analyze.
Theorem 2. Fix a class of binary functions F of VC dimension d. The above randomized proper learning algorithm makes an expected d log(n) mistakes on any sequence of
examples of length n ≥ 2, provided that there is some mistake-free f ∈ F.
Proof. Let Vi be the number of labelings fj consistent with the first i examples, so that
L = V0 ≥ V1 ≥ · · · ≥ Vn ≥ 1 and L ≤ n^d, by Sauer's lemma [11] for n ≥ 2, where
d is the VC dimension of F. Observe that the number of consistent labelings that make
a mistake on the ith example is exactly Vi−1 − Vi. Hence, the total expected number of
mistakes is,

Σ_{i=1}^n (Vi−1 − Vi)/Vi−1 ≤ Σ_{i=1}^n (1/(Vi + 1) + 1/(Vi + 2) + · · · + 1/Vi−1) ≤ Σ_{i=Vn+1}^{V0} 1/i ≤ Σ_{i=2}^{L} 1/i ≤ log(L).
4 More formally, take L functions with the following properties: for each pair 1 ≤ j, k ≤ L with
j ≠ k, there exists x ∈ Σ such that fj(x) ≠ fk(x), and for every f ∈ F, there exists a 1 ≤ j ≤ L
with f(x) = fj(x) for all x ∈ Σ.
Hence the above algorithm achieves an error rate of O(d log(n)/n), which quickly approaches zero for large n. Note that this closely matches what one achieves in the batch
setting. Like the batch setting, no better bounds can be given up to a constant factor.
4 General setting
We now consider the more difficult unrealizable setting where we have an unconstrained
sequence of examples (though we still work in a transductive setting). We begin by presenting a known (inefficient) extension of the halving algorithm of the previous section
that works in the agnostic (unrealizable) setting, similar to the previous algorithm.
Inefficient proper transductive online learning algorithm A1:
• Preprocessing: Given the set of unlabeled examples Σ, take L functions f1, f2, . . . , fL that give rise to the L different labelings of
x ∈ Σ. Assign an initial weight w1 = w2 = . . . = wL = 1 to
each function.
• Output fj, where 1 ≤ j ≤ L is chosen with probability wj / (w1 + . . . + wL).
• Update: for each j for which fj(xi) ≠ yi, reduce wj:
wj := wj (1 − √((log L)/n)).
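One round of A1 in code (our sketch; w is the weight vector over the L labelings and preds_i collects fj(xi) for all j):

import numpy as np

def a1_round(w, preds_i, y_i, n, rng):
    # sample a labeling index with probability proportional to its weight
    j = rng.choice(len(w), p=w / w.sum())
    # multiplicative update: shrink every labeling that errs on example i
    eta = np.sqrt(np.log(len(w)) / n)
    w = np.where(preds_i != y_i, w * (1 - eta), w)
    return j, w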
Using an analysis very similar to that of Weighted Majority [9], one can show that, for any
n > 1 and sequence of examples σn ∈ (X × {−1, 1})^n,

E[#mistakes(A1, σn)] ≤ minf∈F #mistakes(f, σn) + 2 √(dn log n),

where d is the VC dimension of F. Note the similarity to the standard VC bound.
4.1 Efficient algorithm
We can only hope to get an efficient proper online algorithm when there is an efficient
proper batch algorithm. As mentioned in section 2.1, this means that there is a batch
algorithm M that, given any data set, efficiently finds a hypothesis h ? F of minimum
empirical error. (In fact, most proper learning algorithms work this way to begin with.)
Using this, our efficient algorithm is as follows.
Efficient transductive online learning algorithm A2:
• Preprocessing: Given the set of unlabeled examples Σ, create a hallucinated data set τ as follows.
1. For each example x ∈ Σ, choose an integer rx uniformly at random
such that −n^(1/4) ≤ rx ≤ n^(1/4).
2. Add |rx| copies of the example x labeled by the sign of rx,
(x, sgn(rx)), to τ.
• To predict on xi: output hypothesis M(τ σi−1) ∈ F, where τ σi−1
is the concatenation of the hallucinated examples and the observed
labeled examples so far.
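A sketch of A2's preprocessing and prediction steps (ours; erm stands for the assumed batch empirical-error minimizer M, which must be supplied by the user):

import numpy as np

def hallucinate(Sigma, n, rng):
    # the hallucinated data set tau, built once in preprocessing
    tau, bound = [], int(np.floor(n ** 0.25))
    for x in Sigma:
        r = int(rng.integers(-bound, bound + 1))  # r_x uniform on [-n^(1/4), n^(1/4)]
        tau.extend([(x, 1 if r > 0 else -1)] * abs(r))
    return tau

def a2_predict(erm, tau, history, x_i):
    # history is sigma_{i-1}; fit on the concatenation tau . sigma_{i-1}
    h = erm(tau + list(history))
    return h(x_i)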
The current algorithm predicts f(xi) based on f = M(τ σi−1). We first begin by analyzing
the hypothetical algorithm that uses the function chosen on the next iteration, i.e., predicts
f(xi) based on f = M(τ σi). (Of course, this is impossible to implement because we do
not know σi when predicting f(xi).)
Lemma 1. Fix any τ ∈ (X × Y)* and σn ∈ (X × Y)^n. Let A′2 be the algorithm that, for
each i, predicts f(xi) based on f ∈ F which is any empirical minimizer on the concatenated data τ σi, i.e., f = M(τ σi). Then the total number of mistakes of A′2 is,

#mistakes(A′2, σn) ≤ minf∈F #mistakes(f, τ σn) − minf∈F #mistakes(f, τ).
It is instructive to first consider the case where τ is empty, i.e., there are no hallucinated
examples. Then, our algorithm that predicts according to M(σi−1) could be called "follow
the leader," as in [6]. The above lemma means that if one could use the hypothetical "be
the leader" algorithm then one would make no more mistakes than the best f ∈ F. The
proof of this case is simple. Imagine starting with the offline algorithm that uses M(σn)
on each example x1, . . . , xn. Now, on the first n − 1 examples, replace the use of M(σn)
by M(σn−1). Since M(σn−1) is an error-minimizer on σn−1, this can only reduce the
number of mistakes. Next replace M(σn−1) by M(σn−2) on the first n − 2 examples, and
so on. Eventually, we reach the hypothetical algorithm above, and we have only decreased
our number of mistakes. The proof of the above lemma follows along these lines.
Proof of Lemma 1. Fix empirical minimizers gi on τ σi for i = 0, 1, . . . , n, i.e., gi =
M(τ σi). For i ≥ 1, let mi be 1 if gi(xi) ≠ yi and 0 otherwise. We argue by induction on t that,

#mistakes(g0, τ) + Σ_{i=1}^t mi ≤ #mistakes(gt, τ σt).   (1)

For t = 0, the two are trivially equal. Assuming it holds for t, we have,

#mistakes(g0, τ) + Σ_{i=1}^{t+1} mi ≤ #mistakes(gt, τ σt) + mt+1
≤ #mistakes(gt+1, τ σt) + mt+1
= #mistakes(gt+1, τ σt+1).

The first inequality above holds by the induction hypothesis, and the second follows from the
fact that gt is an empirical minimizer of τ σt. The equality establishes (1) for t + 1 and thus
completes the induction. The total number of mistakes of the hypothetical algorithm proposed in the
lemma is Σ_{i=1}^n mi, which gives the lemma by rearranging (1) for t = n.
Lemma 2. For any σn,

Eτ[minf∈F #mistakes(f, τ σn)] ≤ Eτ[|τ|/2] + minf∈F #mistakes(f, σn).

For any F of VC dimension d,

Eτ[minf∈F #mistakes(f, τ)] ≥ Eτ[|τ|/2] − 1.5 n^(3/4) √(d log n).
Proof. For the first part of the lemma, let g = M(σn) be an empirical minimizer on σn.
Then,

Eτ[minf∈F #mistakes(f, τ σn)] ≤ Eτ[#mistakes(g, τ σn)] = Eτ[|τ|/2] + #mistakes(g, σn).

The last equality holds because, since each example in τ is equally likely to have a ±
label, the expected number of mistakes of any fixed g ∈ F on τ is E[|τ|/2].

Fix any f ∈ F. For the second part of the lemma, observe that we can write the number of
mistakes of f on τ as,

#mistakes(f, τ) = (|τ| − Σ_{i=1}^n f(xi)ri) / 2.

Hence it suffices to show that E[maxf∈F Σ_{i=1}^n f(xi)ri] ≤ 3 n^(3/4) √(log(L)).

Now Eri[f(xi)ri] = 0 and |f(xi)ri| ≤ n^(1/4). Next, Chernoff bounds (on the scaled random variables f(xi)ri n^(−1/4)) imply that, for any λ ≥ 1, with probability at most e^(−λ²/2),
Σ_{i=1}^n f(xi)ri n^(−1/4) ≥ √n λ. Put another way, for any x < n, with probability at most
e^(−n^(−3/2) x²/2), Σ f(xi)ri ≥ x. As observed before, we can reduce the problem to
the L different labelings. In other words, we can assume that there are only L different functions. By the union bound, the probability that Σ f(xi)ri ≥ x for any f ∈ F
is at most L e^(−n^(−3/2) x²/2). Now the expectation of a non-negative random variable X is
E[X] = ∫₀^∞ Pr[X ≥ x] dx. Let X = maxf∈F Σ_{i=1}^n f(xi)ri. In our case,

E[X] ≤ √(2 log(L)) n^(3/4) + ∫_{√(2 log(L)) n^(3/4)}^∞ L e^(−n^(−3/2) x²/2) dx.

By Mathematica, the above is at most √(2 log(L)) n^(3/4) + 1.254 n^(3/4) ≤ 3 √(log(L)) n^(3/4).
Finally, we use the fact that L ≤ n^d by Sauer's lemma.
Unfortunately, we cannot use the algorithm A′2. However, due to the randomness we have
added, we can argue that algorithm A2 is quite close:
Lemma 3. For any σn, for any i, with probability at least 1 − n^(−1/4) over τ, M(τ σi−1) is
an empirical minimizer of τ σi.
Proof. Define F+ = {f ∈ F | f(xi) = 1} and F− = {f ∈ F | f(xi) = −1}. WLOG,
we may assume that F+ and F− are both nonempty. For if not, i.e., if all f ∈ F predict
the same sign f(xi), then the sets of empirical minimizers of τ σi−1 and τ σi are equal and
the lemma holds trivially. For any sequence ρ ∈ (X × Y)*, define,

s+(ρ) = minf∈F+ #mistakes(f, ρ)   and   s−(ρ) = minf∈F− #mistakes(f, ρ).

Next observe that, if s+(ρ) < s−(ρ) then M(ρ) ∈ F+. Similarly if s−(ρ) < s+(ρ) then
M(ρ) ∈ F−. If they are equal then an empirical minimizer can be found in either. WLOG
let us say that the ith example is (xi, 1), i.e., it is labeled positively. This implies that
s+(τ σi) = s+(τ σi−1) and s−(τ σi) = s−(τ σi−1) + 1. It is now clear that if M(τ σi−1) is
not also an empirical minimizer of τ σi then s+(τ σi−1) = s−(τ σi−1).

Now the quantity Δ = s+(τ σi−1) − s−(τ σi−1) is directly related to r_xi, the signed random
number of times that example xi is hallucinated. If we fix σn and the random choices rx
for each x ∈ Σ \ {xi}, as we increase or decrease r_xi by 1, Δ correspondingly increases or
decreases by 1. Since r_xi was chosen from a range of size 2⌊n^(1/4)⌋ + 1 ≥ n^(1/4), Δ = 0 with
probability at most n^(−1/4).
We are now ready to prove the main theorem.
Proof of Theorem 1. Combining Lemmas 1 and 2, if on each period i we used any minimizer of empirical error on the data τ σi, we would have a total expected number of mistakes of at
most minf∈F #mistakes(f, σn) + 1.5 n^(3/4) √(d log n). Suppose A2 does end up using such
a minimizer on all but p periods. Then, its total number of mistakes can only be p larger
than this bound. By Lemma 3, the expected number p of periods i in which an empirical
minimizer of τ σi is not used is ≤ n^(3/4). Hence, the expected total number of mistakes of
A2 is at most,

Eτ[#mistakes(A2, σn)] ≤ minf∈F #mistakes(f, σn) + 1.5 n^(3/4) √(d log n) + n^(3/4).

The above implies the theorem.
Remark 1. The above algorithm is still costly in the sense that we must re-run the batch
error minimizer for each prediction we would like to make. Using an idea quite similar to
the "follow the lazy leader" algorithm in [6], we can achieve the same expected error while
only needing to call M with probability n^(−1/4) on each example.
Remark 2. The above analysis resembles previous analysis of hallucination algorithms.
However, unlike previous analyses, there is no exponential distribution in the hallucination
here yet the bounds still depend only logarithmically on the number of labelings.
5 Conclusions and open problems
We have given an algorithm for learning in the transductive online setting and established
several results between efficient proper batch and transductive online learnability. In the
realizable case, however, we have not given a computationally efficient algorithm. Hence,
it is an open question as to whether efficient learnability in the batch and transductive online settings are the same in the realizable case. In addition, our computationally efficient
algorithm requires polynomially more examples than its inefficient counterpart. It would
be nice to have the best of both worlds, namely a computationally efficient algorithm that
achieves a number of mistakes that is at most O(√(dn log n)). Additionally, it would be nice
to remove the restriction to proper algorithms.
Acknowledgements. We would like to thank Maria-Florina Balcan, Dean Foster, John
Langford, and David McAllester for helpful discussions.
References
[1] B. Awerbuch and R. Kleinberg. Adaptive routing with end-to-end feedback: Distributed learning
and geometric approaches. In Proc. of the 36th ACM Symposium on Theory of Computing, 2004.
[2] S. Ben-David, E. Kushilevitz, and Y. Mansour. Online learning versus offline learning. Machine
Learning 29:45-63, 1997.
[3] A. Blum. Separating Distribution-Free and Mistake-Bound Learning Models over the Boolean
Domain. SIAM Journal on Computing 23(5): 990-1000, 1994.
[4] A. Blum, J. Hartline. Near-Optimal Online Auctions. In Proceedings of the
Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2005.
[5] J. Hannan. Approximation to Bayes Risk in Repeated Plays. In M. Dresher, A. Tucker, and
P. Wolfe, editors, Contributions to the Theory of Games, Volume 3, p. 97-139, Princeton University Press, 1957.
[6] A. Kalai and S. Vempala. Efficient algorithms for the online decision problem. In Proceedings
of the 16th Conference on Computational Learning Theory, 2003.
[7] M. Kearns, R. Schapire, and L. Sellie. Toward Efficient Agnostic Learning. Machine Learning,
17(2/3):115–141, 1994.
[8] N. Littlestone. From On-Line to Batch Learning. In Proceedings of the 2nd Workshop on Computational Learning Theory, p. 269-284, 1989.
[9] N. Littlestone and M. Warmuth. The Weighted Majority Algorithm. Information and Computation, 108:212-261, 1994.
[10] H. Brendan McMahan and Avrim Blum. Online Geometric Optimization in the Bandit Setting
Against an Adaptive Adversary. In Proceedings of the 17th Annual Conference on Learning
Theory, COLT 2004.
[11] N. Sauer. On the Densities of Families of Sets. Journal of Combinatorial Theory, Series A, 13,
p 145-147, 1972.
[12] V. N. Vapnik. Estimation of Dependencies Based on Empirical Data, New York: Springer Verlag, 1982.
[13] V. N. Vapnik. Statistical Learning Theory, New York: Wiley Interscience, 1998.
1,933 | 2,756 | Consistency of one-class SVM and related
algorithms
Régis Vert
Laboratoire de Recherche en Informatique
Université Paris-Sud
91405, Orsay Cedex, France
Masagroup
24 Bd de l'Hôpital
75005, Paris, France
[email protected]
Jean-Philippe Vert
Geostatistics Center
Ecole des Mines de Paris - ParisTech
77300 Fontainebleau, France
[email protected]
Abstract
We determine the asymptotic limit of the function computed by support
vector machines (SVM) and related algorithms that minimize a regularized empirical convex loss function in the reproducing kernel Hilbert
space of the Gaussian RBF kernel, in the situation where the number of
examples tends to infinity, the bandwidth of the Gaussian kernel tends
to 0, and the regularization parameter is held fixed. Non-asymptotic convergence bounds to this limit in the L2 sense are provided, together with
upper bounds on the classification error that is shown to converge to the
Bayes risk, therefore proving the Bayes-consistency of a variety of methods although the regularization term does not vanish. These results are
particularly relevant to the one-class SVM, for which the regularization
can not vanish by construction, and which is shown for the first time to
be a consistent density level set estimator.
1 Introduction
Given n i.i.d. copies (X_1, Y_1), ..., (X_n, Y_n) of a random variable (X, Y) ∈ R^d × {−1, 1}, we study in this paper the limit and consistency of learning algorithms that solve the following problem:

arg min_{f ∈ H_σ} { (1/n) Σ_{i=1}^{n} φ(Y_i f(X_i)) + λ ||f||²_{H_σ} },          (1)

where φ : R → R is a convex loss function and H_σ is the reproducing kernel Hilbert space (RKHS) of the normalized Gaussian radial basis function kernel (denoted simply Gaussian kernel below):

k_σ(x, x′) = (1 / (√(2π) σ)^d) exp( −||x − x′||² / (2σ²) ),   σ > 0.          (2)
This framework encompasses in particular the classical support vector machine (SVM) [1] when φ(u) = max(1 − u, 0). Recent years have witnessed important theoretical advances aimed at understanding the behavior of such regularized algorithms when n tends to infinity and λ decreases to 0. In particular the consistency and convergence rates of the two-class SVM (see, e.g., [2, 3, 4] and references therein) have been studied in detail, as well as the shape of the asymptotic decision function [5, 6]. All results published so far study the case where λ decreases as the number of points tends to infinity (or, equivalently, where λσ^d converges to 0 if one uses the classical non-normalized version of the Gaussian kernel instead of (2)). Although it seems natural to reduce regularization as more and more training data are available (even more than natural, it is the spirit of regularization [7, 8]), there is at least one important situation where λ is typically held fixed: the one-class SVM [9]. In that case, the goal is to estimate an α-quantile, that is, a subset of the input space X of given probability α with minimum volume. The estimation is performed by thresholding the function output by the one-class SVM, that is, the SVM (1) with only positive examples; in that case λ is supposed to determine the quantile level¹. Although it is known that the fraction of examples in the selected region converges to the desired quantile level α [9], it is still an open question whether the region converges towards a quantile, that is, a region of minimum volume. Besides, most theoretical results about the consistency and convergence rates of the two-class SVM with vanishing regularization constant do not translate to the one-class case, as we are precisely in the rare situation where the SVM is used with a regularization term that does not vanish as the sample size increases.

The main contribution of this paper is to show that Bayes consistency can be obtained for algorithms that solve (1) without decreasing λ, if instead the bandwidth σ of the Gaussian kernel decreases at a suitable rate. We prove upper bounds on the convergence rate of the classification error towards the Bayes risk for a variety of functions φ and of distributions P, in particular for the SVM (Theorem 6). Moreover, we provide an explicit description of the function asymptotically output by the algorithms, and establish convergence rates towards this limit for the L2 norm (Theorem 7). In particular, we show that the decision function output by the one-class SVM converges towards the density to be estimated, truncated at the level 2λ (Theorem 8); we finally show that this implies the consistency of the one-class SVM as a density level set estimator for the excess-mass functional [10] (Theorem 9).

Due to lack of space we limit ourselves in this extended abstract to the statement of the main results (Section 2) and sketch the proof of the main theorem (Theorem 3) that underlies all other results in Section 3. All detailed proofs are available in the companion paper [11].
2 Notations and main results
Let (X, Y) be a pair of random variables taking values in R^d × {−1, 1}, with distribution P. We assume throughout this paper that the marginal distribution of X is absolutely continuous with respect to Lebesgue measure with density ρ : R^d → R, and that it has a support included in a compact set X ⊂ R^d. We denote by η : R^d → [0, 1] a measurable version of the conditional distribution of Y = 1 given X.

The normalized Gaussian radial basis function (RBF) kernel k_σ with bandwidth parameter σ > 0 is defined for any (x, x′) ∈ R^d × R^d by:

k_σ(x, x′) = (1 / (√(2π) σ)^d) exp( −||x − x′||² / (2σ²) ),

and the corresponding reproducing kernel Hilbert space (RKHS) is denoted by H_σ. We note κ_σ = (√(2π) σ)^{−d} the normalizing constant that ensures that the kernel integrates to 1.

¹ While the original formulation of the one-class SVM involves a parameter ν, there is asymptotically a one-to-one correspondence between ν and λ.
Denoting by M the set of measurable real-valued functions on R^d, we define several risks for functions f ∈ M:

• The classification error rate, usually referred to as the (true) risk of f, when Y is predicted by the sign of f(X), is denoted by

  R(f) = P( sign(f(X)) ≠ Y ).

• For a scalar λ > 0 fixed throughout this paper and a convex function φ : R → R, the φ-risk regularized by the RKHS norm is defined, for any σ > 0 and f ∈ H_σ, by

  R_{φ,σ}(f) = E_P[ φ(Y f(X)) ] + λ ||f||²_{H_σ}.

  Furthermore, for any real r ≥ 0, we denote by L(r) the Lipschitz constant of the restriction of φ to the interval [−r, r]. For example, for the hinge loss φ(u) = max(0, 1 − u) one can take L(r) = 1, and for the squared hinge loss φ(u) = max(0, 1 − u)² one can take L(r) = 2(r + 1).

• Finally, the L2-norm regularized φ-risk is, for any f ∈ M:

  R_{φ,0}(f) = E_P[ φ(Y f(X)) ] + λ ||f||²_{L2},

  where

  ||f||²_{L2} = ∫_{R^d} f(x)² dx ∈ [0, +∞].

The minima of the three risk functionals defined above over their respective domains are denoted by R*, R*_{φ,σ} and R*_{φ,0} respectively. Each of these risks has an empirical counterpart where the expectation with respect to P is replaced by an average over an i.i.d. sample T = {(X_1, Y_1), ..., (X_n, Y_n)}. In particular, the following empirical version of R_{φ,σ} will be used:

∀σ > 0, f ∈ H_σ,   R̂_{φ,σ}(f) = (1/n) Σ_{i=1}^{n} φ(Y_i f(X_i)) + λ ||f||²_{H_σ}.

The main focus of this paper is the analysis of learning algorithms that minimize the empirical φ-risk regularized by the RKHS norm, R̂_{φ,σ}, and their limit as the number of points tends to infinity and the kernel width σ decreases to 0 at a suitable rate when n tends to ∞, λ being kept fixed. Roughly speaking, our main result shows that in this situation, if φ is a convex loss function, the minimization of R̂_{φ,σ} asymptotically amounts to minimizing R_{φ,0}. This stems from the fact that the empirical average term in the definition of R̂_{φ,σ} converges to its corresponding expectation, while the norm in H_σ of a function f decreases to its L2 norm when σ decreases to zero. To turn this intuition into a rigorous statement, we need a few more assumptions about the minimizer of R_{φ,0} and about P. First, we observe that the minimizer of R_{φ,0} is indeed well-defined and can often be explicitly computed:
Lemma 1 For any x ∈ R^d, let

f_{φ,0}(x) = arg min_{α ∈ R}  ρ(x) [ η(x) φ(α) + (1 − η(x)) φ(−α) ] + λα².

Then f_{φ,0} is measurable and satisfies:

R_{φ,0}(f_{φ,0}) = inf_{f ∈ M} R_{φ,0}(f).
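Lemma 1 can be checked numerically at a single point. The following sketch (an illustration added for this presentation, not taken from the paper; the function name and grid bounds are arbitrary choices) evaluates the pointwise minimizer for the hinge loss by brute force. In the one-class setting (η ≡ 1) it recovers the truncation at level 2λ that appears later in Theorem 8.

```python
import numpy as np

def pointwise_minimizer(rho_x, eta_x, lam, phi, alphas=None):
    """Evaluate f_{phi,0}(x) of Lemma 1 at one point x, given rho_x = rho(x),
    eta_x = eta(x) and the fixed regularization constant lam = lambda.
    A plain grid search keeps the sketch loss-agnostic."""
    if alphas is None:
        alphas = np.linspace(-2.0, 2.0, 4001)
    objective = rho_x * (eta_x * phi(alphas) + (1.0 - eta_x) * phi(-alphas)) \
        + lam * alphas ** 2
    return alphas[np.argmin(objective)]

hinge = lambda u: np.maximum(0.0, 1.0 - u)

lam = 0.25
for rho_x in [0.1, 0.3, 0.5, 0.8]:
    # With eta = 1 the minimizer is min(rho(x)/(2*lam), 1).
    print(rho_x, pointwise_minimizer(rho_x, 1.0, lam, hinge),
          min(rho_x / (2 * lam), 1.0))
```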
Second, we provide below a general result that shows how to control the excess R_{φ,0}-risk of the empirical minimizer of the R_{φ,σ}-risk, for which we need to recall the notion of modulus of continuity [12].

Definition 2 (Modulus of continuity) Let f be a Lebesgue measurable function from R^d to R. Then its modulus of continuity in the L1-norm is defined for any δ ≥ 0 as follows:

ω(f, δ) = sup_{0 ≤ ||t|| ≤ δ} || f(· + t) − f(·) ||_{L1},          (3)

where ||t|| is the Euclidean norm of t ∈ R^d.
Our main result can now be stated as follows:
Theorem 3 (Main Result) Let σ_1 > σ > 0, 0 < p < 2, ε > 0, and let f̂_{φ,σ} denote a minimizer of the R̂_{φ,σ} risk over H_σ. Assume that the marginal density ρ is bounded, and let M = sup_{x ∈ R^d} ρ(x). Then there exist constants (K_i)_{i=1,...,4} (depending only on p, ε, λ, d, and M) such that, for any x > 0, the following holds with probability greater than 1 − e^{−x} over the draw of the training data:

R_{φ,0}(f̂_{φ,σ}) − R*_{φ,0} ≤ K_1 L(√(φ(0)/λ))^{4/(2+p)} (1/σ)^{[2+(2−p)(1+ε)]d/(2+p)} (1/n)^{2/(2+p)}
                             + K_2 L(√(φ(0)/λ))² √( x / (σ^d n) )          (4)
                             + K_3 σ²/σ_1²
                             + K_4 ω(f_{φ,0}, σ_1).
The first two terms in the r.h.s. of (4) bound the estimation error associated with the Gaussian RKHS, which naturally tends to be small when the number of training data increases and when the RKHS is "small", i.e., when σ is large. As is usually the case in such variance/bias splittings, the variance term here depends on the dimension d of the input space. Note that it is also parametrized by both p and ε. The third term measures the error due to penalizing the L2-norm of a fixed function in H_{σ_1} by its ||·||_{H_σ}-norm, with 0 < σ < σ_1. This is a price to pay to get a small estimation error. As for the fourth term, it is a bound on the approximation error of the Gaussian RKHS. Note that, once σ and ε have been fixed, σ_1 remains a free variable parameterizing the bound itself.
In order to highlight the type of convergence rates one can obtain from Theorem 3, let us assume that the loss function φ is Lipschitz on R (e.g., take the hinge loss), and suppose that for some 0 ≤ β ≤ 1, c_1 > 0, and for any h ≥ 0, the function f_{φ,0} satisfies the following inequality:

ω(f_{φ,0}, h) ≤ c_1 h^β.          (5)

Then we can optimize the right hand side of (4) w.r.t. σ_1, σ, p and ε by balancing the four terms. This eventually leads to:

R_{φ,0}(f̂_{φ,σ}) − R*_{φ,0} = O_P( (1/n)^{2β/(4β+(2+β)d) − ε} ),          (6)

for any ε > 0. This rate is achieved by choosing

σ_1 = (1/n)^{2/(4β+(2+β)d)},          (7)

σ = σ_1^{(2+β)/2} = (1/n)^{(2+β)/(4β+(2+β)d)},          (8)

p close to 2 and ε as small as possible (that is why an arbitrarily small quantity ε appears in the rate).
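To make the scales concrete, here is a small numeric sketch of the schedule (7)-(8) (an illustration added here; constants are dropped, only the exponents in n matter):

```python
def bandwidth_schedule(n, beta, d):
    """Return (sigma_1, sigma, rate) implied by Equations (7), (8) and (6)."""
    sigma1 = n ** (-2.0 / (4 * beta + (2 + beta) * d))
    sigma = sigma1 ** ((2 + beta) / 2.0)
    rate = n ** (-2.0 * beta / (4 * beta + (2 + beta) * d))
    return sigma1, sigma, rate

for n in (10**3, 10**4, 10**5):
    print(n, bandwidth_schedule(n, beta=1.0, d=2))
```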
Theorem 3 shows that minimizing the R̂_{φ,σ} risk for a well-chosen width σ is an algorithm consistent for the R_{φ,0}-risk. In order to relate this consistency with more traditional measures of performance of learning algorithms, the next theorem shows that under a simple additional condition on φ, R_{φ,0}-risk-consistency implies Bayes consistency:

Theorem 4 If φ is convex, differentiable at 0, with φ′(0) < 0, then for every sequence of functions (f_i)_{i ≥ 1} in M,

lim_{i → +∞} R_{φ,0}(f_i) = R*_{φ,0}   ⟹   lim_{i → +∞} R(f_i) = R*.
This theorem results from a more general quantitative analysis of the relationship between the excess R_{φ,0}-risk and the excess R-risk, in the spirit of [13]. In order to state a refined version in the particular case of the support vector machine algorithm, we first need the following definition:

Definition 5 We say that a distribution P with ρ as marginal density of X w.r.t. Lebesgue measure has a low density exponent γ ≥ 0 if there exists (c_2, ε_0) ∈ (0, +∞)² such that

∀ε ∈ [0, ε_0],   P( x ∈ R^d : ρ(x) ≤ ε ) ≤ c_2 ε^γ.
We are now in position to state a quantitative relationship between the excess R_{φ,0}-risk and the excess R-risk in the case of support vector machines:

Theorem 6 Let φ_1(α) = max(1 − α, 0) be the hinge loss function, and φ_2(α) = max(1 − α, 0)² be the squared hinge loss function. Then for any distribution P with low density exponent γ, there exist constants (K_1, K_2, r_1, r_2) ∈ (0, +∞)⁴ such that for any f ∈ M with an excess R_{φ_1,0}-risk upper bounded by r_1 the following holds:

R(f) − R* ≤ K_1 ( R_{φ_1,0}(f) − R*_{φ_1,0} )^{γ/(2γ+1)},

and if the excess regularized R_{φ_2,0}-risk is upper bounded by r_2 the following holds:

R(f) − R* ≤ K_2 ( R_{φ_2,0}(f) − R*_{φ_2,0} )^{γ/(2γ+1)}.
This result can be extended to any loss function through the introduction of variational
arguments, in the spirit of [13]; we do not further explore this direction, but the reader is
invited to consult [11] for more details. Hence we have proved the consistency of SVM,
together with upper bounds on the convergence rates, in a situation where the effect of
regularization does not vanish asymptotically.
Another consequence of the R_{φ,0}-consistency of an algorithm is the L2-convergence of the function output by the algorithm to the minimizer of the R_{φ,0}-risk:

Lemma 7 For any f ∈ M, the following holds:

||f − f_{φ,0}||²_{L2} ≤ (1/λ) ( R_{φ,0}(f) − R*_{φ,0} ).
This result is particularly relevant to the study of algorithms whose objective is not binary classification. Consider for example the one-class SVM algorithm, which served as the initial motivation for this paper. Then we claim the following:

Theorem 8 Let ρ_λ denote the density truncated as follows:

ρ_λ(x) = ρ(x) / (2λ)  if ρ(x) ≤ 2λ,   and   ρ_λ(x) = 1  otherwise.          (9)

Let f̂_λ denote the function output by the one-class SVM, that is the function that solves (1) in the case where φ is the hinge loss function and Y_i = 1 for all i ∈ {1, ..., n}. Then, under the general conditions of Theorem 3, for σ chosen as in Equation (8),

lim_{n → +∞} || f̂_λ − ρ_λ ||_{L2} = 0.
An interesting by-product of this theorem is the consistency of the one-class SVM algorithm for density level set estimation:

Theorem 9 Let 0 < μ < 2λ < M, let C_μ be the level set of the density function ρ at level μ, and let Ĉ_μ be the level set of 2λ f̂_λ at level μ, where f̂_λ is still the function output by the one-class SVM. For any distribution Q, for any subset C of R^d, define the excess mass of C with respect to Q as follows:

H_Q(C) = Q(C) − μ Leb(C),          (10)

where Leb is the Lebesgue measure. Then, under the general assumptions of Theorem 3, we have

lim_{n → +∞} ( H_P(C_μ) − H_P(Ĉ_μ) ) = 0,          (11)

for σ chosen as in Equation (8).
The excess-mass functional was first introduced in [10] to assess the quality of density level set estimators. It is maximized by the true density level set C_μ and acts as a risk functional in the one-class framework. The proof of Theorem 9 is based on the following result: if ρ̂ is a density estimator converging to the true density ρ in the L2 sense, then for any fixed 0 < μ < sup{ρ}, the excess mass of the level set of ρ̂ at level μ converges to the excess mass of C_μ. In other words, as is the case in the classification framework, plug-in estimators built on L2-consistent density estimators are consistent with respect to the excess mass.
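As a practical aside (not part of the paper), Theorems 8 and 9 suggest the following recipe, sketched here with scikit-learn under stated assumptions: scikit-learn's OneClassSVM is parameterized by ν rather than λ (the two are asymptotically in one-to-one correspondence, cf. footnote 1), its RBF kernel is non-normalized with gamma = 1/(2σ²), and thresholding the decision scores at an empirical quantile stands in for the level μ.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 2))            # sample from the density rho

n, d, beta = X.shape[0], X.shape[1], 1.0      # beta as in Equation (5)
sigma = n ** (-(2 + beta) / (4 * beta + (2 + beta) * d))   # Equation (8)
model = OneClassSVM(kernel="rbf", gamma=1.0 / (2 * sigma**2), nu=0.5).fit(X)

# Plug-in level set estimate: keep grid points whose score clears a threshold.
ticks = np.linspace(-3, 3, 100)
grid = np.stack(np.meshgrid(ticks, ticks), axis=-1).reshape(-1, 2)
scores = model.decision_function(grid)
level_set = grid[scores >= np.quantile(scores, 0.8)]
print(level_set.shape)
```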
3 Proof of Theorem 3 (sketch)

In this section we sketch the proof of the main learning theorem of this contribution, which underlies most other results stated in Section 2. The proof of Theorem 3 is based on the following decomposition of the excess R_{φ,0}-risk for the minimizer f̂_{φ,σ} of R̂_{φ,σ}, valid for any 0 < σ < √2 σ_1 and any sample (x_i, y_i)_{i=1,...,n}:

R_{φ,0}(f̂_{φ,σ}) − R*_{φ,0} = [ R_{φ,0}(f̂_{φ,σ}) − R_{φ,σ}(f̂_{φ,σ}) ]
                             + [ R_{φ,σ}(f̂_{φ,σ}) − R*_{φ,σ} ]
                             + [ R*_{φ,σ} − R_{φ,σ}(k_{σ_1} ∗ f_{φ,0}) ]          (12)
                             + [ R_{φ,σ}(k_{σ_1} ∗ f_{φ,0}) − R_{φ,0}(k_{σ_1} ∗ f_{φ,0}) ]
                             + [ R_{φ,0}(k_{σ_1} ∗ f_{φ,0}) − R*_{φ,0} ].

It can be shown that k_{σ_1} ∗ f_{φ,0} ∈ H_{√2 σ_1} ⊂ H_σ ∩ L2(R^d), which justifies the introduction of R_{φ,σ}(k_{σ_1} ∗ f_{φ,0}) and R_{φ,0}(k_{σ_1} ∗ f_{φ,0}). By studying the relationship between the Gaussian RKHS norm and the L2 norm, it can be shown that

R_{φ,0}(f̂_{φ,σ}) − R_{φ,σ}(f̂_{φ,σ}) = λ ( ||f̂_{φ,σ}||²_{L2} − ||f̂_{φ,σ}||²_{H_σ} ) ≤ 0,

while the following stems from the definition of R*_{φ,σ}:

R*_{φ,σ} − R_{φ,σ}(k_{σ_1} ∗ f_{φ,0}) ≤ 0.

Hence, controlling R_{φ,0}(f̂_{φ,σ}) − R*_{φ,0} boils down to controlling each of the remaining three terms in (12).
• The second term in (12) is usually referred to as the sample error or estimation error. The control of such quantities has been the topic of much research recently, including for example [14, 15, 16, 17, 18, 4]. Using estimates of local Rademacher complexities through covering numbers for the Gaussian RKHS due to [4], the following result can be shown:

  Lemma 10 For any σ > 0 small enough, let f̂_{φ,σ} be the minimizer of the R̂_{φ,σ} risk on a sample of size n, where φ is a convex loss function. For any 0 < p < 2, ε > 0, and x ≥ 1, the following holds with probability at least 1 − e^{−x} over the draw of the sample:

  R_{φ,σ}(f̂_{φ,σ}) − R*_{φ,σ} ≤ K_1 L(√(φ(0)/λ))^{4/(2+p)} (1/σ)^{[2+(2−p)(1+ε)]d/(2+p)} (1/n)^{2/(2+p)}
                               + K_2 L(√(φ(0)/λ))² √( x / (σ^d n) ),

  where K_1 and K_2 are positive constants depending neither on σ, nor on n.
• In order to upper bound the fourth term in (12), the analysis of the convergence of the Gaussian RKHS norm towards the L2 norm when the bandwidth of the kernel tends to 0 leads to:

  R_{φ,σ}(k_{σ_1} ∗ f_{φ,0}) − R_{φ,0}(k_{σ_1} ∗ f_{φ,0}) = λ ( ||k_{σ_1} ∗ f_{φ,0}||²_{H_σ} − ||k_{σ_1} ∗ f_{φ,0}||²_{L2} )
                                                        ≤ λ (σ²/(2σ_1²)) ||f_{φ,0}||²_{L2} ≤ (σ²/(2σ_1²)) φ(0).

• The fifth term in (12) corresponds to the approximation error. It can be shown that for any bounded function f in L1(R^d) and all σ > 0, the following holds:

  || k_σ ∗ f − f ||_{L1} ≤ (1 + √d) ω(f, σ),          (13)

  where ω(f, ·) denotes the modulus of continuity of f in the L1 norm. From this the following inequality can be derived:

  R_{φ,0}(k_{σ_1} ∗ f_{φ,0}) − R_{φ,0}(f_{φ,0}) ≤ ( 2λ ||f_{φ,0}||_{L∞} + L(||f_{φ,0}||_{L∞}) M ) (1 + √d) ω(f_{φ,0}, σ_1).
4 Conclusion
We have shown that consistency of learning algorithms that minimize a regularized empirical risk can be obtained even when the so-called regularization term does not asymptotically vanish, and derived the consistency of one-class SVM as a density level set estimator.
Our method of proof is based on an unusual decomposition of the excess risk due to the
presence of the regularization term, which plays an important role in the determination of
the asymptotic limit of the function that minimizes the empirical risk. Although the upper
bounds on the convergence rates we obtain are not optimal, they provide a first step toward
the analysis of learning algorithms in this context.
Acknowledgments
The authors are grateful to Stéphane Boucheron, Pascal Massart and Ingo Steinwart for fruitful discussions. This work was supported by the ACI "Nouvelles interfaces des Mathématiques" of the French Ministry for Research, and by the IST Program of the European Community, under the Pascal Network of Excellence, IST-2002-506778.
References
[1] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proceedings of the 5th annual ACM workshop on Computational Learning Theory, pages 144–152. ACM Press, 1992.
[2] I. Steinwart. Support vector machines are universally consistent. J. Complexity, 18:768–791, 2002.
[3] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Ann. Stat., 32:56–134, 2004.
[4] I. Steinwart and C. Scovel. Fast rates for support vector machines using Gaussian kernels. Technical report, Los Alamos National Laboratory, 2004. Submitted to Annals of Statistics.
[5] I. Steinwart. Sparseness of support vector machines. J. Mach. Learn. Res., 4:1071–1105, 2003.
[6] P. L. Bartlett and A. Tewari. Sparseness vs estimating conditional probabilities: Some asymptotic results. In Lecture Notes in Computer Science, volume 3120, pages 564–578. Springer, 2004.
[7] A. N. Tikhonov and V. Y. Arsenin. Solutions of ill-posed problems. W. H. Winston, Washington, D.C., 1977.
[8] B. W. Silverman. On the estimation of a probability density function by the maximum penalized likelihood method. Ann. Stat., 10:795–810, 1982.
[9] B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Comput., 13:1443–1471, 2001.
[10] J. A. Hartigan. Estimation of a convex density contour in two dimensions. J. Amer. Statist. Assoc., 82(397):267–270, 1987.
[11] R. Vert and J.-P. Vert. Consistency and convergence rates of one-class SVM and related algorithms. J. Mach. Learn. Res., 2006. To appear.
[12] R. A. DeVore and G. G. Lorentz. Constructive Approximation. Springer Grundlehren der Mathematischen Wissenschaften. Springer Verlag, 1993.
[13] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification and risk bounds. Technical Report 638, UC Berkeley Statistics, 2003.
[14] A. B. Tsybakov. On nonparametric estimation of density level sets. Ann. Stat., 25:948–969, June 1997.
[15] E. Mammen and A. Tsybakov. Smooth discrimination analysis. Ann. Stat., 27(6):1808–1829, 1999.
[16] P. Massart. Some applications of concentration inequalities to statistics. Ann. Fac. Sc. Toulouse, IX(2):245–303, 2000.
[17] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 2005. To appear.
[18] V. Koltchinskii. Localized Rademacher complexities. Manuscript, September 2003.
| 2756 |@word version:4 seems:1 norm:15 open:1 decomposition:2 euclidian:1 initial:1 ecole:1 rkhs:10 denoting:1 scovel:1 dx:1 bd:1 lorentz:1 shape:1 v:1 discrimination:1 selected:1 vanishing:1 recherche:1 math:1 zhang:1 c2:2 prove:1 excellence:1 indeed:1 roughly:1 behavior:2 nor:1 sud:1 decreasing:1 provided:1 estimating:2 notation:1 moreover:1 bounded:4 mass:6 minimizes:1 quantitative:2 every:1 berkeley:1 act:1 universit:1 k2:9 classifier:1 platt:1 control:2 assoc:1 pirical:1 yn:2 appear:2 mcauliffe:1 positive:2 local:2 tends:8 limit:7 consequence:1 k2l2:5 mach:2 therein:1 studied:1 koltchinskii:1 acknowledgment:1 silverman:1 empirical:7 vert:6 word:1 radial:2 get:1 risk:30 context:1 restriction:1 measurable:4 optimize:1 fruitful:1 center:1 leb:2 convex:8 estimator:7 parameterizing:1 proving:1 notion:1 annals:2 construction:1 suppose:1 arbitray:1 controlling:2 play:1 us:1 particularly:2 ep:2 role:1 region:3 ensures:1 decrease:6 intuition:1 convexity:1 complexity:4 mine:1 grateful:1 basis:2 informatique:1 fast:1 fac:1 sc:1 choosing:1 refined:1 jean:2 whose:1 posed:1 solve:2 valued:1 say:1 otherwise:1 toulouse:1 statistic:4 itself:1 sequence:1 differentiable:1 product:1 fr:2 relevant:2 translate:1 supposed:1 description:1 kh:2 olkopf:1 los:1 convergence:10 r1:2 rademacher:3 converges:6 depending:2 stat:4 op:1 solves:1 predicted:1 involves:1 implies:2 direction:1 hold:6 exp:2 k2h:3 k3:1 claim:1 estimation:8 integrates:1 minimization:2 gaussian:13 derived:2 focus:1 june:1 likelihood:1 lri:1 regis:1 rigorous:1 sense:2 typically:1 france:3 arg:2 classification:7 ill:1 pascal:2 denoted:4 exponent:2 uc:1 marginal:3 once:1 washington:1 report:2 ephane:1 few:1 national:1 replaced:1 ourselves:1 lebesgue:4 held:2 respective:1 taylor:1 desired:1 re:2 theoretical:2 witnessed:1 subset:2 kl1:2 alamo:1 ensmp:1 supx:1 st:1 density:20 together:2 squared:2 de:5 fontainebleau:1 explicitly:1 depends:1 performed:1 sup:2 bayes:5 contribution:2 minimize:3 correspondance:1 ass:1 variance:2 maximized:1 served:1 published:1 submitted:1 definition:5 naturally:1 proof:7 associated:1 boil:1 proved:1 recall:1 lim:4 hilbert:3 appears:1 manuscript:1 devore:1 formulation:1 amer:1 furthermore:1 smola:1 sketch:3 hand:1 steinwart:4 lack:1 continuity:4 french:1 quality:1 modulus:4 effect:1 normalized:3 true:3 regularization:10 hence:2 consistant:1 boucheron:1 laboratory:1 width:2 covering:1 mammen:1 l1:3 interface:1 variational:1 ef:1 fi:3 recently:1 functional:3 volume:3 mathematischen:1 rd:17 seldom:1 consistency:17 hp:2 shawe:1 recent:1 inf:1 tikhonov:1 verlag:1 inequality:3 binary:1 opital:1 yi:4 der:1 minimum:3 greater:1 ministry:1 determine:2 converge:2 egis:1 stem:2 ing:1 technical:2 smooth:1 determination:1 plug:1 converging:1 underlies:2 expectation:2 kernel:14 achieved:1 c1:2 interval:1 laboratoire:1 sch:1 invited:1 massart:2 cedex:1 grundlehren:1 spirit:3 jordan:1 consult:1 orsay:1 presence:1 enough:1 variety:2 bandwidth:4 reduce:1 whether:1 bartlett:3 speaking:1 tewari:1 detailed:1 aimed:1 amount:1 nonparametric:1 tsybakov:2 statist:1 exist:2 sign:2 estimated:1 ist:2 four:1 k4:1 hartigan:1 penalizing:1 neither:1 kept:1 asymptotically:5 fraction:1 year:1 fourth:2 throughout:2 reader:1 guyon:1 draw:2 decision:2 bound:10 ki:1 pay:1 winston:1 annual:1 infinity:4 precisely:1 bousquet:1 argument:1 min:2 nouvelles:1 refered:1 equation:2 remains:1 turn:1 eventually:1 unusual:1 studying:1 available:2 observe:1 ematiques:1 original:1 denotes:1 remaining:1 hinge:6 quantile:4 k1:5 establish:1 classical:2 objective:1 question:1 
quantity:2 concentration:1 traditional:1 september:1 hq:1 parametrized:1 topic:1 toward:1 besides:1 relationship:3 minimizing:1 equivalently:1 statement:2 relate:1 stated:2 upper:7 ingo:1 philippe:2 truncated:2 situation:5 extended:2 y1:2 reproducing:3 community:1 introduced:1 pair:1 paris:3 kl:3 boser:1 geostatistics:1 below:2 usually:3 encompasses:1 program:1 built:1 max:5 including:1 suitable:2 natural:2 regularized:7 understanding:1 l2:12 kf:1 asymptotic:5 loss:12 lecture:1 highlight:1 interesting:1 localized:1 consistent:4 thresholding:1 balancing:1 arsenin:1 penalized:1 supported:1 copy:1 free:1 choosen:2 bias:1 side:1 emb:1 taking:1 fifth:1 dimension:2 xn:2 valid:1 contour:1 author:1 universally:1 far:1 functionals:1 excess:15 compact:1 xi:3 aci:1 continuous:1 why:1 learn:2 williamson:1 european:1 domain:1 wissenschaften:1 main:9 motivation:1 x1:2 referred:1 en:1 position:1 explicit:1 comput:1 vanish:5 third:1 ix:1 theorem:22 companion:1 down:1 r2:2 svm:21 normalizing:1 exists:1 workshop:1 mendelson:1 vapnik:1 justifies:1 sparseness:2 margin:1 simply:1 explore:1 scalar:1 springer:3 corresponds:1 minimizer:7 satisfies:2 acm:2 conditional:2 goal:1 ann:5 rbf:2 towards:5 lipschitz:2 price:1 paristech:1 included:1 lemma:3 called:1 support:9 absolutely:1 constructive:1 ex:1 |
1,934 | 2,757 | Generalized Nonnegative Matrix
Approximations with Bregman Divergences
Inderjit S. Dhillon
Suvrit Sra
Dept. of Computer Sciences
The Univ. of Texas at Austin
Austin, TX 78712.
{inderjit,suvrit}@cs.utexas.edu
Abstract
Nonnegative matrix approximation (NNMA) is a recent technique for dimensionality reduction and data analysis that yields a parts based, sparse
nonnegative representation for nonnegative input data. NNMA has found
a wide variety of applications, including text analysis, document clustering, face/image recognition, language modeling, speech processing and
many others. Despite these numerous applications, the algorithmic development for computing the NNMA factors has been relatively deficient. This paper makes algorithmic progress by modeling and solving
(using multiplicative updates) new generalized NNMA problems that
minimize Bregman divergences between the input matrix and its low-rank approximation. The multiplicative update formulae in the pioneering work by Lee and Seung [11] arise as a special case of our algorithms. In addition, the paper shows how to use penalty functions for incorporating constraints other than nonnegativity into the problem. Further, some interesting extensions to the use of "link" functions for modeling nonlinear relationships are also discussed.
1 Introduction
Nonnegative matrix approximation (NNMA) is a method for dimensionality reduction and
data analysis that has gained favor over the past few years. NNMA has previously been
called positive matrix factorization [13] and nonnegative matrix factorization¹ [12]. Assume that a_1, ..., a_N are N nonnegative input (M-dimensional) vectors. We organize these vectors as the columns of a nonnegative data matrix

A := [ a_1  a_2  · · ·  a_N ].

NNMA seeks a small set of K nonnegative representative vectors b_1, ..., b_K that can be nonnegatively (or conically) combined to approximate the input vectors a_i. That is,

a_n ≈ Σ_{k=1}^{K} c_{kn} b_k,   1 ≤ n ≤ N,

¹ We use the word approximation instead of factorization to emphasize the inexactness of the process, since the input A is approximated by BC.
where the combining coefficients c_{kn} are restricted to be nonnegative. If the c_{kn} and b_k are unrestricted, and we minimize Σ_n ||a_n − Bc_n||², the Truncated Singular Value Decomposition (TSVD) of A yields the optimal b_k and c_{kn} values. If the b_k are unrestricted, but the coefficient vectors c_n are restricted to be indicator vectors, then we obtain the problem of hard-clustering (see [16, Chapter 8] for related discussion regarding different constraints on c_n and b_k).
In this paper we consider problems where all involved matrices are nonnegative. For many
practical problems nonnegativity is a natural requirement. For example, color intensities,
chemical concentrations, frequency counts etc., are all nonnegative entities, and approximating their measurements by nonnegative representations leads to greater interpretability.
NNMA has found a significant number of applications, not only due to increased interpretability, but also because admitting only nonnegative combinations of the bk leads to
sparse representations.
This paper contributes to the algorithmic advancement of NNMA by generalizing the problem significantly, and by deriving efficient algorithms based on multiplicative updates for
the generalized problems. The scope of this paper is primarily on generic methods for
NNMA, rather than on specific applications. The multiplicative update formulae in the pioneering work by Lee and Seung [11] arise as a special case of our algorithms, which seek
to minimize Bregman divergences between the nonnegative input A and its approximation. In addition, we discuss the use of penalty functions for incorporating constraints other than nonnegativity into the problem. Further, we illustrate an interesting extension of our algorithms for handling non-linear relationships through the use of "link" functions.

2 Problems
Given a nonnegative matrix A as input, the classical NNMA problem is to approximate it by a lower rank nonnegative matrix of the form BC, where B = [b_1, ..., b_K] and C = [c_1, ..., c_N] are themselves nonnegative. That is, we seek the approximation

A ≈ BC,   where B, C ≥ 0.          (2.1)

We judge the goodness of the approximation in (2.1) by using a general class of distortion measures called Bregman divergences. For any strictly convex function φ : S ⊆ R → R that has a continuous first derivative, the corresponding Bregman divergence D_φ : S × int(S) → R₊ is defined as D_φ(x, y) := φ(x) − φ(y) − ∇φ(y)(x − y), where int(S) is the interior of the set S [1, 2]. Bregman divergences are nonnegative, convex in the first argument, and zero if and only if x = y. These divergences play an important role in convex optimization [2]. For the sequel we consider only separable Bregman divergences, i.e., D_φ(X, Y) = Σ_{ij} D_φ(x_{ij}, y_{ij}). We further require x_{ij}, y_{ij} ∈ dom φ ⊆ R₊.

Formally, the resulting generalized nonnegative matrix approximation problems are:

min_{B,C ≥ 0}  D_φ(BC, A) + α(B) + β(C),          (2.2)

min_{B,C ≥ 0}  D_φ(A, BC) + α(B) + β(C).          (2.3)

The functions α and β serve as penalty functions, and they allow us to enforce regularization (or other constraints) on B and C. We consider both (2.2) and (2.3) since Bregman divergences are generally asymmetric. Table 1 gives a small sample of NNMA problems to illustrate the breadth of our formulation.
3 Algorithms
In this section we present algorithms that seek to optimize (2.2) and (2.3). Our algorithms
are iterative in nature, and are directly inspired by the efficient algorithms of Lee and Seung
[11]. Appealing properties include ease of implementation and computational efficiency.
Divergence D_φ           | φ(x)      | α(B)           | β(C)            | Remarks
||A − BC||²_F            | x²/2      | 0              | 0               | Lee and Seung [11, 12]
||A − BC||²_F            | x²/2      | 0              | λ·1ᵀC1          | Hoyer [10]
||W ⊙ (A − BC)||²_F      | x²/2      | 0              | 0               | Paatero and Tapper [13]
KL(A, BC)                | x log x   | 0              | 0               | Lee and Seung [11]
KL(A, W BC)              | x log x   | 0              | 0               | Guillamet et al. [9]
KL(A, BC)                | x log x   | c_1·1ᵀBᵀB1     | −c_2·||C||²_F   | Feng et al. [8]
D_φ(A, W_1 BC W_2)       | (general) | (general)      | (general)       | Weighted NNMA (new)

Table 1: Some example NNMA problems that may be obtained from (2.3). The corresponding asymmetric problem (2.2) has not been previously treated in the literature. KL(x, y) denotes the generalized KL-divergence = Σ_i x_i log(x_i/y_i) − x_i + y_i (also called I-divergence).
Note that the problems (2.2) and (2.3) are not jointly convex in B and C, so it is not easy
to obtain globally optimal solutions in polynomial time. Our iterative procedures start by
initializing B and C randomly or otherwise. Then, B and C are alternately updated until
there is no further appreciable change in the objective function value.
3.1 Algorithms for (2.2)
We utilize the concept of auxiliary functions [11] for our derivations. It is sufficient to
illustrate our methods using a single column of C (or row of B), since our divergences are
separable.
Definition 3.1 (Auxiliary function). A function G(c, c′) is called an auxiliary function for F(c) if:

1. G(c, c) = F(c), and
2. G(c, c′) ≥ F(c) for all c′.

Auxiliary functions turn out to be useful due to the following lemma.

Lemma 3.2 (Iterative minimization). If G(c, c′) is an auxiliary function for F(c), then F is non-increasing under the update

c^{t+1} = argmin_c G(c, c^t).

Proof. F(c^{t+1}) ≤ G(c^{t+1}, c^t) ≤ G(c^t, c^t) = F(c^t).
As can be observed, the sequence formed by the iterative application of Lemma 3.2 leads to
a monotonic decrease in the objective function value F (c). For an algorithm that iteratively
updates c in its quest to minimize F (c), the method for proving convergence boils down to
the construction of an appropriate auxiliary function. Auxiliary functions have been used
in many places before, see for example [5, 11].
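The monotone-descent logic of Lemma 3.2 is independent of the particular auxiliary function, so it can be written once as a generic wrapper. The sketch below is illustrative only; `argmin_G` is a hypothetical callable standing in for a problem-specific minimizer such as the closed-form updates derived next.

```python
def mm_minimize(F, argmin_G, c0, max_iter=100, tol=1e-8):
    """Majorize-minimize loop of Lemma 3.2: c <- argmin_c G(c, c_old)."""
    c, f_old = c0, F(c0)
    for _ in range(max_iter):
        c = argmin_G(c)
        f_new = F(c)
        assert f_new <= f_old + 1e-12   # guaranteed by Lemma 3.2
        if f_old - f_new < tol:
            break
        f_old = f_new
    return c
```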
We now construct simple auxiliary functions for (2.2) that yield multiplicative updates. To avoid clutter we drop the functions α and β from (2.2), noting that our methods can easily be extended to incorporate these functions.

Suppose B is fixed and we wish to compute an updated column of C. We wish to minimize

F(c) = D_φ(Bc, a),          (3.1)

where a is the column of A corresponding to the column c of C. The lemma below shows how to construct an auxiliary function for (3.1). For convenience of notation we use ψ to denote ∇φ for the rest of this section.
Lemma 3.3 (Auxiliary function). The function

G(c, c′) = Σ_{ij} λ_{ij} φ( b_{ij} c_j / λ_{ij} ) − Σ_i [ φ(a_i) + ψ(a_i) ( (Bc)_i − a_i ) ],          (3.2)

with λ_{ij} = (b_{ij} c′_j) / ( Σ_l b_{il} c′_l ), is an auxiliary function for (3.1). Note that by definition Σ_j λ_{ij} = 1, and as both b_{ij} and c_j are nonnegative, λ_{ij} ≥ 0.
Proof. It is easy to verify that G(c, c) = F(c), since Σ_j λ_{ij} = 1. Using the convexity of φ, we conclude that if Σ_j λ_{ij} = 1 and λ_{ij} ≥ 0, then

F(c) = Σ_i [ φ( Σ_j b_{ij} c_j ) − φ(a_i) − ψ(a_i) ( (Bc)_i − a_i ) ]
     ≤ Σ_{ij} λ_{ij} φ( b_{ij} c_j / λ_{ij} ) − Σ_i [ φ(a_i) + ψ(a_i) ( (Bc)_i − a_i ) ] = G(c, c′).
To obtain the update, we minimize G(c, c′) w.r.t. c. Let ψ(x) denote the vector [ψ(x_1), ..., ψ(x_n)]ᵀ. We compute the partial derivative

∂G/∂c_p = Σ_i λ_{ip} (b_{ip}/λ_{ip}) ψ( c_p b_{ip} / λ_{ip} ) − Σ_i b_{ip} ψ(a_i)
        = Σ_i b_{ip} ψ( (c_p / c′_p) (Bc′)_i ) − ( Bᵀ ψ(a) )_p.          (3.3)

We need to solve (3.3) for c_p by setting ∂G/∂c_p = 0. Solving this equation analytically is not always possible. However, for a broad class of functions, we can obtain an analytic solution. For example, if ψ is multiplicative (i.e., ψ(xy) = ψ(x)ψ(y)) we obtain the following iterative update relations for b and c (see [7]):

b_p ← b_p · ψ^{−1}( [ψ(aᵀ)Cᵀ]_p / [ψ(bᵀC)Cᵀ]_p ),          (3.4)

c_p ← c_p · ψ^{−1}( [Bᵀψ(a)]_p / [Bᵀψ(Bc)]_p ).          (3.5)

It turns out that when φ is a convex function of Legendre type, then ψ^{−1} can be obtained by the derivative of the conjugate function φ* of φ, i.e., ψ^{−1} = ∇φ* [14].

Note. (3.4) & (3.5) coincide with updates derived by Lee and Seung [11], if φ(x) = ½x².
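For concreteness, the φ(x) = ½x² special case of (3.4)-(3.5) can be coded in a few lines (a minimal sketch added here, assuming ψ is the identity so ψ⁻¹ is too; the small `eps` guarding against division by zero is an implementation convenience, not part of the derivation):

```python
import numpy as np

def nnma_frobenius(A, K, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||A - BC||_F^2, B, C >= 0."""
    rng = np.random.default_rng(seed)
    M, N = A.shape
    B, C = rng.random((M, K)), rng.random((K, N))
    for _ in range(iters):
        C *= (B.T @ A) / (B.T @ (B @ C) + eps)
        B *= (A @ C.T) / ((B @ C) @ C.T + eps)
    return B, C
```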
3.1.1 Examples of New NNMA Problems
We illustrate the power of our generic auxiliary functions given above for deriving algorithms with multiplicative updates for some specific interesting problems.
First we consider the problem that seeks to minimize the divergence

KL(Bc, a) = Σ_i (Bc)_i log( (Bc)_i / a_i ) − (Bc)_i + a_i,   B, c ≥ 0.          (3.6)

Let φ(x) = x log x − x. Then ψ(x) = log x, and as ψ(xy) = ψ(x) + ψ(y), upon substituting in (3.3) and setting the resultant to zero we obtain

∂G/∂c_p = Σ_i b_{ip} log( c_p (Bc′)_i / c′_p ) − Σ_i b_{ip} log a_i = 0,

⟹ (Bᵀ1)_p log( c_p / c′_p ) = [ Bᵀ log a − Bᵀ log(Bc′) ]_p

⟹ c_p = c′_p · exp( [ Bᵀ log( a / (Bc′) ) ]_p / [Bᵀ1]_p ).
Constrained NNMA. Next we consider NNMA problems that have additional constraints.
We illustrate our ideas on a problem with linear constraints.
min D? (Bc, a)
x
s.t. P c ? 0,
c ? 0.
(3.7)
We can solve (3.7) problem using our method by making use of an appropriate (differentiable) penalty function that enforces P c ? 0. We consider,
F (c) = D? (Bc, a) + ?k max(0, P c)k2 ,
(3.8)
where ? > 0 is some penalty constant. Assuming multiplicative ? and following the
auxiliary function technique described above, we obtain the following updates for c,
? T
?
T
+
?1 [B ?(a)]k ? ?[P (P c) ]k
ck ? ck ? ?
,
[B T ?(Bc)]k
where (P c)+ = max(0, P c). Note that care must be taken to ensure that the addition of
this penalty term does not violate the nonnegativity of c, and to ensure that the argument
of ? ?1 lies in its domain.
Remarks. Incorporating additional constraints into (3.6) is however easier, since the exponential updates ensure nonnegativity. Given a = 1, with appropriate penalty functions,
our solution to (3.6) can be utilized for maximizing entropy of Bc subject to linear or
non-linear constraints on c.
Nonlinear models with ?link? functions. If A ? h(BC), where h is a ?link? function
that models a nonlinear relationship between A and the approximant BC, we may wish
to minimize D? (h(BC), A). We can easily extend our methods to handle this case for
appropriate h. Recall that the auxiliary function that we used, depended upon the convexity
of ?. Thus, if (? ?h) is a convex function, whose derivative ?(? ?h) is ?factorizable,? then
we can easily derive algorithms for this problem with link functions. We exclude explicit
examples for lack of space and refer the reader to [7] for further details.
3.2
Algorithms using KKT conditions
We now derive efficient multiplicative update relations for (2.3), and these updates turn out
to be simpler than those for (2.2). To avoid clutter, we describe our methods with ? ? 0,
and ? ? 0, noting that if ? and ? are differentiable, then it is easy to incorporate them in
our derivations. For convenience we use ?(x) to denote ?2 (x) for the rest of this section.
Using matrix algebra, one can show that the gradients of D? (A, BC) w.r.t. B and C are,
?
?
?B D? (A, BC) = ?(BC) ? (BC ? A) C T
?
?
?C D? (A, BC) =B T ?(BC) ? (BC ? A) ,
where ? denotes the elementwise or Hadamard product, and ? is applied elementwise to
BC. According to the KKT conditions, there exist Lagrange multiplier matrices ? ? 0
and ? ? 0 such that
?B D? (A, BC) = ?,
?mk bmk = ?kn ckn = 0.
?C D? (A, BC) = ?,
(3.9a)
(3.9b)
Writing out the gradient ?B D? (A, BC) elementwise, multiplying by bmk , and making
use of (3.9a,b), we obtain
??
?
?
?(BC) ? (BC ? A) C T mk bmk = ?mk bmk = 0,
which suggests the iterative scheme
bmk
??
?
?
?(BC) ? A C T mk
?
? .
? bmk ??
?(BC) ? BC C T mk
(3.10)
Proceeding in a similar fashion we obtain a similar iterative formula for ckn , which is
?
?
[B T ?(BC) ? A ]kn
? .
ckn ? ckn T ?
(3.11)
[B ?(BC) ? BC ]kn
3.2.1 Examples of New and Old NNMA Problems as Special Cases
We now illustrate the power of our approach by showing how one can easily obtain iterative
update relations for many NNMA problems, including known and new problems. For more
examples and further generalizations we refer the reader to [7].
Lee and Seung?s Algorithms. Let ? ? 0, ? ? 0. Now if we set ?(x) = 21 x2 or
?(x) = x log x, then (3.10) and (3.11) reduce to the Frobenius norm and KL-Divergence
update rules originally derived by Lee and Seung [11].
Elementwise
weighted distortion.
Here we wish to minimize kW ?(A?BC)k2F . Using
?
?
X ? W ? X, and A ? W ? A in (3.10) and (3.11) one obtains
B?B?
(W ? A)C T
,
(W ? (BC))C T
C?C?
B T (W ? A)
.
B T (W ? (BC))
These iterative updates are significantly simpler than the PMF algorithms of [13].
The Multifactor NNMA Problem (new). The above ideas can be extended to the multifactor NNMA problem that seeks to minimize the following divergence (see [7])
D? (A, B1 B2 . . . BR ),
where all matrices involved are nonnegative. A typical usage of multifactor NNMA problem would be to obtain a three-factor NNMA, namely A ? RBC. Such an approximation
is closely tied to the problem of co-clustering [3], and can be used to produce relaxed coclustering solutions [7].
Weighted NNMA Problem (new). We can follow the same derivation method as above
(based on KKT conditions) for obtaining multiplicative updates for the weighted NNMA
problem:
min D? (A, W1 BCW2 ),
where W1 and W2 are nonnegative (and nonsingular) weight matrices. The work of [9] is
a special case as mentioned in Table 1. Please refer to [7] for more details.
4
Experiments and Discussion
We have looked at generic algorithms for minimizing Bregman divergences between the
input and its approximation. One important question arises: Which Bregman divergence
should one use for a given problem? Consider the following factor analytic model
A = BC + N ,
where N represents some additive noise present in the measurements A, and the aim is to
recover B and C. If we assume that the noise is distributed according to some member
of the exponential family, then minimizing the corresponding Bregman divergence [1] is
appropriate. For e.g., if the noise is modeled as i.i.d. Gaussian noise, then the Frobenius
norm based problem is natural.
Another question is: Which version of the problem we should use, (2.2) or (2.3)? For
?(x) = 21 x2 , both problems coincide. For other ?, the choice between (2.2) and (2.3) can
be guided by computation issues or sparsity patterns of A. Clearly, further work is needed
for answering this question in more detail.
Some other open problems involve looking at the class of minimization problems to which
the iterative methods of Section 3.2 may be applied. For example, determining the class
of functions h, for which these methods may be used to minimize D? (A, h(BC)). Other
possible methods for solving both (2.2) and (2.3), such as the use of alternating projections
(AP) for NNMA, also merit a study.
Our methods for (2.2) decreased the objective function monotonically (by construction).
However, we did not demonstrate such a guarantee for the updates (3.10) & (3.11). Figure 1
offers encouraging empirical evidence in favor of a monotonic behavior of these updates.
It is still an open problem to formally prove this monotonic decrease. Preliminary results
that yield new monotonicity proofs for the Frobenius norm and KL-divergence NNMA
problems may be found in [7].
?(x) = ? log x
PMF Objective
3
?(x) = x log x ? x
28
19
26
2.9
18
24
17
2.7
2.6
2.5
22
Objective function value
Objective function value
Objective function value
2.8
20
18
16
16
15
14
14
2.4
13
12
2.3
2.2
12
10
0
10
20
30
40
50
60
Number of iterations
70
80
90
100
8
0
10
20
30
40
50
60
Number of iterations
70
80
90
100
11
0
10
20
30
40
50
60
Number of iterations
70
80
90
100
Figure 1: Objective function values over 100 iterations for different NNMA problems. The input matrix A was a random 20×8 nonnegative matrix. Matrices B and C were 20×4 and 4×8, respectively.
NNMA has been used in a large number of applications, a fact that attests to its importance
and appeal. We believe that special cases of our generalized problems will prove to be
useful for applications in data mining and machine learning.
5 Related Work
Paatero and Tapper [13] introduced NNMA as positive matrix factorization, and they aimed to minimize ||W ⊙ (A − BC)||_F, where W was a fixed nonnegative matrix of weights. NNMA remained confined to applications in environmetrics and chemometrics before the pioneering papers of Lee and Seung [11, 12] popularized the problem. Lee and Seung [11] provided simple and efficient algorithms for the NNMA problems that sought to minimize ||A − BC||_F and KL(A, BC). Lee & Seung called these problems nonnegative matrix factorization (NNMF), and their algorithms have inspired our generalizations.

NNMA was applied to a host of applications including text analysis, face/image recognition, language modeling, and speech processing amongst others. We refer the reader to [7] for pointers to the literature on various applications of NNMA.

Srebro and Jaakkola [15] discuss elementwise weighted low-rank approximations without any nonnegativity constraints. Collins et al. [6] discuss algorithms for obtaining a low rank approximation of the form A ≈ BC, where the loss functions are Bregman divergences; however, there is no restriction on B and C. More recently, Cichocki et al. [4] presented schemes for NNMA with Csiszár's divergences, though rigorous convergence proofs seem to be unavailable. Our approach of Section 3.2 also yields heuristic methods for minimizing Csiszár's divergences.
Acknowledgments
This research was supported by NSF grant CCF-0431257, NSF Career Award ACI-0093404, and NSF-ITR award IIS-0325116.
References
[1] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman Divergences. In SIAM International Conf. on Data Mining, Lake Buena Vista, Florida, April 2004. SIAM.
[2] Y. Censor and S. A. Zenios. Parallel Optimization: Theory, Algorithms, and Applications. Numerical Mathematics and Scientific Computation. Oxford University Press, 1997.
[3] H. Cho, I. S. Dhillon, Y. Guan, and S. Sra. Minimum Sum Squared Residue based Co-clustering of Gene Expression data. In Proc. 4th SIAM International Conference on Data Mining (SDM), pages 114–125, Florida, 2004. SIAM.
[4] A. Cichocki, R. Zdunek, and S. Amari. Csiszár's Divergences for Non-Negative Matrix Factorization: Family of New Algorithms. In 6th Int. Conf. ICA & BSS, USA, March 2006.
[5] M. Collins, R. Schapire, and Y. Singer. Logistic regression, AdaBoost, and Bregman distances. In Thirteenth annual conference on COLT, 2000.
[6] M. Collins, S. Dasgupta, and R. E. Schapire. A Generalization of Principal Components Analysis to the Exponential Family. In NIPS 2001, 2001.
[7] I. S. Dhillon and S. Sra. Generalized nonnegative matrix approximations. Technical report, Computer Sciences, University of Texas at Austin, 2005.
[8] T. Feng, S. Z. Li, H.-Y. Shum, and H. Zhang. Local nonnegative matrix factorization as a visual representation. In Proceedings of the 2nd International Conference on Development and Learning, pages 178–193, Cambridge, MA, June 2002.
[9] D. Guillamet, M. Bressan, and J. Vitrià. A weighted nonnegative matrix factorization for local representations. In CVPR. IEEE, 2001.
[10] P. O. Hoyer. Non-negative sparse coding. In Proc. IEEE Workshop on Neural Networks for Signal Processing, pages 557–565, 2002.
[11] D. D. Lee and H. S. Seung. Algorithms for nonnegative matrix factorization. In NIPS, pages 556–562, 2000.
[12] D. D. Lee and H. S. Seung. Learning the parts of objects by nonnegative matrix factorization. Nature, 401:788–791, October 1999.
[13] P. Paatero and U. Tapper. Positive matrix factorization: A nonnegative factor model with optimal utilization of error estimates of data values. Environmetrics, 5(111–126), 1994.
[14] R. T. Rockafellar. Convex Analysis. Princeton Univ. Press, 1970.
[15] N. Srebro and T. Jaakkola. Weighted low-rank approximations. In Proc. of 20th ICML, 2003.
[16] J. A. Tropp. Topics in Sparse Approximation. PhD thesis, The Univ. of Texas at Austin, 2004.
| 2757 |@word version:1 polynomial:1 norm:3 nd:1 open:2 seek:6 decomposition:1 reduction:2 shum:1 document:1 bc:56 past:1 ka:3 must:1 numerical:1 additive:1 analytic:2 drop:1 update:22 advancement:1 pointer:1 simpler:2 zhang:1 prove:2 ica:1 behavior:1 themselves:1 inspired:2 globally:1 encouraging:1 increasing:1 provided:1 notation:1 ghosh:1 guarantee:1 k2:2 utilization:1 grant:1 organize:1 positive:3 before:2 local:2 xyii:1 depended:1 despite:1 oxford:1 ap:1 argminc:1 suggests:1 co:2 ease:1 factorization:10 jaakola:2 practical:1 acknowledgment:1 enforces:1 procedure:1 empirical:1 significantly:2 projection:1 word:1 coclustering:1 convenience:2 interior:1 writing:1 optimize:1 nonnegatively:1 restriction:1 maximizing:1 convex:7 rule:1 deriving:2 proving:1 handle:1 updated:2 construction:2 play:1 suppose:1 recognition:2 approximated:1 utilized:1 asymmetric:2 observed:1 role:1 guillamet:2 initializing:1 decrease:2 mentioned:1 convexity:2 seung:13 dom:1 solving:3 algebra:1 serve:1 upon:2 efficiency:1 easily:4 chapter:1 tx:1 various:1 vista:1 derivation:3 univ:3 describe:1 whose:1 heuristic:1 solve:2 cvpr:1 distortion:2 otherwise:1 amari:1 vitri:1 favor:2 jointly:1 ip:3 sequence:1 differentiable:2 sdm:1 product:1 combining:1 hadamard:1 frobenius:3 csisz:3 chemometrics:1 convergence:2 rbc:1 requirement:1 produce:1 object:1 illustrate:6 derive:2 ij:13 lowrank:1 progress:1 auxiliary:14 c:1 judge:1 guided:1 closely:1 nnmf:1 require:1 generalization:3 preliminary:1 yij:2 extension:2 strictly:1 exp:1 algorithmic:3 scope:1 substituting:1 sought:1 a2:1 proc:3 utexas:1 weighted:7 minimization:2 clearly:1 always:1 gaussian:1 aim:1 attests:1 rather:1 ck:2 avoid:2 derived:3 june:1 rank:4 c1b:1 rigorous:1 censor:1 bt:1 relation:3 issue:1 colt:1 development:2 constrained:1 special:5 construct:2 kw:3 broad:1 represents:1 k2f:2 icml:1 others:2 report:1 few:1 primarily:1 bil:1 randomly:1 divergence:24 mining:3 admitting:1 bregman:14 partial:1 xy:2 old:1 pmf:2 mk:5 increased:1 column:5 modeling:4 ar:3 goodness:1 bressan:1 kn:3 combined:1 cho:1 international:3 siam:4 sequel:1 lee:13 w1:3 squared:1 thesis:1 conf:2 derivative:4 li:1 approximant:1 exclude:1 b2:1 coding:1 coefficient:2 int:3 rockafellar:1 multiplicative:10 start:1 recover:1 parallel:1 minimize:13 formed:1 merugu:1 yield:5 nonsingular:1 multiplying:1 definition:2 frequency:1 involved:2 resultant:1 proof:4 boil:1 recall:1 color:1 dimensionality:2 cj:4 originally:1 follow:1 adaboost:1 april:1 formulation:1 though:1 until:1 tropp:1 nonlinear:3 banerjee:1 lack:1 logistic:1 scientific:1 believe:1 usa:1 usage:1 concept:1 verify:1 multiplier:1 ccf:1 regularization:1 analytically:1 chemical:1 alternating:1 dhillon:4 iteratively:1 bcn:1 please:1 generalized:7 demonstrate:1 cp:14 image:2 recently:1 discussed:1 extend:1 elementwise:5 measurement:2 significant:1 refer:4 cambridge:1 ai:14 mathematics:1 similarly:1 language:2 etc:1 recent:1 suvrit:2 yi:1 minimum:1 unrestricted:2 greater:1 additional:2 care:1 relaxed:1 monotonically:1 signal:1 ii:1 violate:1 technical:1 offer:1 host:1 award:2 a1:2 regression:1 iteration:4 confined:1 c1:2 addition:3 residue:1 thirteenth:1 decreased:1 singular:1 w2:1 rest:2 subject:1 deficient:1 member:1 seem:1 noting:2 easy:3 variety:1 zenios:1 reduce:1 regarding:1 cn:3 idea:2 bmk:6 br:1 itr:1 texas:3 expression:1 penalty:7 speech:2 remark:2 generally:1 useful:2 involve:1 aimed:1 clutter:2 schapire:2 xij:2 exist:1 nsf:3 multifactor:3 dasgupta:1 tsvd:1 breadth:1 utilize:1 year:1 sum:1 place:1 family:3 reader:3 environmetrics:2 lake:1 
ct:8 nonnegative:32 annual:1 constraint:9 bp:2 x2:3 argument:2 min:4 separable:2 relatively:1 according:2 popularized:1 combination:1 march:1 legendre:1 conjugate:1 appealing:1 making:2 restricted:2 taken:1 equation:1 previously:2 discus:3 count:1 turn:3 needed:1 bip:6 merit:1 singer:1 generic:3 enforce:1 appropriate:5 florida:2 denotes:2 clustering:5 include:1 ensure:3 approximating:1 classical:1 feng:2 objective:8 question:3 looked:1 concentration:1 hoyer:2 gradient:2 amongst:1 distance:1 link:5 entity:1 topic:1 assuming:1 modeled:1 relationship:3 minimizing:3 october:1 negative:2 implementation:1 truncated:1 extended:2 looking:1 intensity:1 bk:8 introduced:1 namely:1 kl:9 alternately:1 nip:2 below:1 pattern:1 sparsity:1 pioneering:3 including:3 interpretability:2 max:2 power:2 natural:2 treated:1 indicator:1 scheme:2 numerous:1 cichocki:2 text:2 literature:2 kf:1 determining:1 loss:1 interesting:3 srebro:2 sufficient:1 ckn:8 inexactness:1 austin:4 row:1 supported:1 allow:1 wide:1 face:2 sparse:4 distributed:1 bs:1 xn:1 coincide:2 approximate:2 emphasize:1 obtains:1 gene:1 monotonicity:1 kkt:3 b1:4 conclude:1 xi:2 continuous:1 iterative:10 table:3 nature:2 sra:3 career:1 obtaining:2 contributes:1 unavailable:1 domain:1 factorizable:1 did:1 noise:4 arise:2 x1:1 representative:1 fashion:1 nonnegativity:6 wish:4 explicit:1 exponential:3 lie:1 tied:1 answering:1 guan:1 bij:5 formula:3 down:1 remained:1 specific:2 showing:1 zdunek:1 appeal:1 evidence:1 incorporating:3 workshop:1 gained:1 importance:1 phd:1 tapper:3 easier:1 entropy:1 generalizing:1 visual:1 lagrange:1 inderjit:2 paatero:3 monotonic:3 kan:1 ma:1 buena:1 appreciable:1 hard:1 change:1 typical:1 lemma:5 principal:1 called:5 formally:2 quest:1 arises:1 collins:3 incorporate:2 dept:1 princeton:1 handling:1 |
1,935 | 2,758 | A Hierarchical Compositional System for Rapid
Object Detection
Long Zhu and Alan Yuille
Department of Statistics
University of California at Los Angeles
Los Angeles, CA 90095
{lzhu,yuille}@stat.ucla.edu
Abstract
We describe a hierarchical compositional system for detecting deformable objects in images. Objects are represented by graphical models.
The algorithm uses a hierarchical tree where the root of the tree corresponds to the full object and lower-level elements of the tree correspond
to simpler features. The algorithm proceeds by passing simple messages
up and down the tree. The method works rapidly, in under a second,
on 320 × 240 images. We demonstrate the approach on detecting cats,
horses, and hands. The method works in the presence of background
clutter and occlusions. Our approach is contrasted with more traditional
methods such as dynamic programming and belief propagation.
1 Introduction
Detecting objects rapidly in images is very important. There has recently been great
progress in detecting objects with limited appearance variability, such as faces and text
[1,2,3]. The use of the SIFT operator also enables rapid detection of rigid objects [4]. The
detection of such objects can be performed in under a second even in very large images
which makes real time applications practical, see [3].
There has been less progress for the rapid detection of deformable objects, such as hands, horses, and cats. Such objects can be represented compactly by graphical models, see [5,6,7,8], but their variations in shape and appearance make searching for them considerably harder.

Recent work has included the use of dynamic programming [5,6] and belief propagation [7,8] to perform inference on these graphical models by searching over different spatial configurations. These algorithms are successful at detecting objects but pruning was required to obtain reasonable convergence rates [5,7,8]. Even so, algorithms can take minutes to converge on images of size 320 × 240.

In this paper, we propose an alternative method for performing inference on graphical models of deformable objects. Our approach is based on representing objects in a probabilistic compositional hierarchical tree structure. This structure enables rapid detection of objects by passing messages up and down the tree structure. Our approach is fast, with a typical speed of 0.6 seconds on a 320 × 240 image (without optimized code).
Our approach can be applied to detect any object that can be represented by a graphical model. This includes the models mentioned above [5,6,7,8], compositional models
[9], constellation models [10], models using chamfer matching [11] and models using deformable blur filters [12].
2 Background
Graphical models give an attractive framework for modeling object detection problems in
computer vision. We use the models and notation described in [8].
The positions of feature points on the object are represented by {x_i : i ∈ Ω}. We augment
this representation to include attributes of the points and obtain a representation {q_i : i ∈ Ω}.
These attributes can be used to model the appearance of the features in the image.
For example, a feature point can be associated with an oriented intensity edge and qi can
represent the orientation [8]. Alternatively, the attribute could represent the output of a
blurred edge filter [12], or the appearance properties of a constellation model part [10].
There is a prior probability distribution on the configuration of the model P ({qi }) and a
likelihood function for generating the image data P (D|{qi }). We use the same likelihood
model as [8]. Our priors are similar to [5,8,12], being based on deformations away from a
prototype template.
Inference consists of maximizing the posterior P ({qi }|D) = P (D|{qi })P ({qi })/P (D).
As described in [8], this corresponds to maximizing a posterior of the form:
P({q_i}|D) = (1/Z) ∏_i ψ_i(q_i) ∏_{i,j} ψ_{ij}(q_i, q_j),   (1)
where {ψ_i(q_i)} and {ψ_{ij}(q_i, q_j)} are the unary and pairwise potentials of the graph. The
unary potentials model how well the individual features match to positions in the image.
The binary potentials impose (probabilistic) constraints about the spatial relationships between feature points.
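As a concrete illustration, here is a minimal sketch (ours, not from the paper) of evaluating the unnormalized log-posterior of a candidate configuration; the potential tables `unary` and `pairwise` and the edge list are hypothetical placeholders standing in for the model's potentials.

import numpy as np

def log_posterior(q, unary, pairwise, edges):
    # Unnormalized log of Eq. (1): sum of log unary terms plus log
    # pairwise terms over the graph edges; the partition function Z
    # is a constant and can be ignored when comparing configurations.
    lp = sum(np.log(unary[i](q[i])) for i in range(len(q)))
    lp += sum(np.log(pairwise[i, j](q[i], q[j])) for (i, j) in edges)
    return lp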
Algorithms such as dynamic programming [5,6] and belief propagation [7,8] have been
used to search for optima of P ({qi }|D). But the algorithms are time consuming because
each state variable qi can take a large number of values (each feature point on the template
can, in principle, match any point in the 240 × 320 image). Pruning and other ingenious
techniques are used to speed up the search [5,7,8]. But performance remains at speeds of
seconds to minutes.
3 The Hierarchical Compositional System
We define a compositional hierarchy by breaking down the representation {q_i : i ∈ Ω} into
substructures which have their own probability models.
At the first level, we group elements into K₁ subsets {q_i : i ∈ S_a^1}, where Ω = ∪_{a=1}^{K₁} S_a^1 and S_a^1 ∩ S_b^1 = ∅ for a ≠ b. These subsets correspond to meaningful parts of the
object, such as ears and other features. See figure (1) for the basic structure. Specific
examples for cats and horses will be given later.
For each of these subsets we define a generative model P_a^1(D|{q_i : i ∈ S_a^1}) and a prior
P_a^1({q_i : i ∈ S_a^1}). These generative and prior models are inherited from the full model,
see equation (1), by simply cutting the connections between the subset S_a^1 and Ω\S_a^1
(the remaining features on the object). Hence
P_a^1(D|{q_i : i ∈ S_a^1}) = (1/Z_a^1) ∏_{i∈S_a^1} ψ_i(q_i)
Figure 1: The Hierarchical Compositional structure. The full model contains all the nodes of S_1^3. This is decomposed into subsets S_1^2, S_2^2, S_3^2 corresponding to sub-features. These, in turn, can be decomposed into subsets corresponding to more elementary features.
P_a^1({q_i : i ∈ S_a^1}) = (1/Z̃_a^1) ∏_{i,j∈S_a^1} ψ_{ij}(q_i, q_j).   (2)
We repeat the same process at the second and higher levels. The subsets {S_a^1 : a = 1, ..., K₁} are composed to form a smaller selection of subsets {S_b^2 : b = 1, ..., K₂}, so that Ω = ∪_{b=1}^{K₂} S_b^2, S_a^2 ∩ S_b^2 = ∅ for a ≠ b, and each S_a^1 is contained entirely inside one S_b^2.
Again the S_b^2 are selected to correspond to meaningful parts of the object. Their generative
models and prior distributions are again obtained from the full model, see equation (1), by
cutting off the links to the remaining nodes Ω\S_b^2.
The algorithm is run using two thresholds T₁, T₂. For each subset, say S_a^1, we define the
evidence to be P_a^1(D|{z_i^* : i ∈ S_a^1}) P_a^1({z_i^* : i ∈ S_a^1}). We determine all possible
configurations {z_i^* : i ∈ S_a^1} such that the evidence of each configuration is above T₁. This
gives a (possibly large) set of positions for the {q_i : i ∈ S_a^1}. We apply non-maximum
suppression to reduce the many similar configurations in the same local area to the one with
maximum evidence (measured locally); we observe that a small displacement of position does
not change optimality much for upper-level matching. Typically, non-maximum suppression
keeps around 30-500 candidate configurations for each node. These remaining
configurations can be considered as proposals [13] and are passed up the tree to the subset S_b^2 which contains S_a^1. Node S_b^2 evaluates the proposals to determine which ones are
consistent, thus detecting composites of the subfeatures.
There is also top-down message passing, which occurs when one part of a node S_b^2 has
high evidence, e.g. P_a^1(D|{z_i^* : i ∈ S_a^1}) P_a^1({z_i^* : i ∈ S_a^1}) > T₂, but the other
child nodes have no consistent values. In this case, we allow the matching to proceed if the
combined matching strength is above threshold T₁. This mechanism enables the high-level
models and, in particular, the priors for the relative positions of the sub-nodes to overcome
weak local evidence. This performs a similar function to Coughlan and Shen's dynamic
quantization scheme [8]. A sketch of the resulting recursive pass appears below.
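The following Python sketch is our own schematic rendering of this bottom-up/top-down pass, not code from the paper; `match_feature`, `compose`, and `non_max_suppress` are hypothetical helpers standing in for the leaf template matcher, the composition of child proposals under the node's prior, and local non-maximum suppression.

def detect(node, image, T1, T2):
    # Bottom-up pass: leaves match template features to the image;
    # internal nodes combine the proposals of their children.
    if node.is_leaf():
        candidates = match_feature(node, image)          # hypothetical matcher
    else:
        child_proposals = [detect(c, image, T1, T2) for c in node.children]
        candidates = compose(node, child_proposals)      # consistent combinations
    kept = []
    for c in candidates:
        strongest = max((p.evidence for p in getattr(c, "parts", [])), default=0.0)
        # keep if the evidence clears T1, or if one part is strong (> T2)
        # so the node's prior can rescue weak siblings top-down
        if c.evidence > T1 or strongest > T2:
            kept.append(c)
    return non_max_suppress(kept)                        # one winner per local area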
More sophisticated versions of this approach can be considered. For example, we could use
the proposals to activate a data driven Monte Carlo Markov Chain (DDMCMC) algorithm
[13]. To our knowledge, the use of hierarchical proposals of this type is unknown in the
Monte Carlo sampling literature.
4 Experimental Results
We illustrate our hierarchical compositional system on examples of cats, horses, and hands.
The images include background clutter and the objects can be partially occluded.
Figure 2: The prototype cat (top left panel), edges after grouping (top right panel), prototype template for ears and top of head (bottom left panel), and prototype for ears and eyes
(bottom right panel). 15 points are used for the ears and 24 for the head.
First we preprocess the image using a Canny edge detector followed by simple edge grouping which eliminates isolated edges. Edge detection and edge grouping is illustrated in the
top panels of figure (2). This figure is used to construct a prototype template for the ears,
eyes, and head ? see bottom panels of figure (2).
We construct a graphical model for the cat as described in section (2). Then we define a
hierarchical structure, see figure (3).
Figure 3: Hierarchy Structure for Cat Template.
Next we illustrate the results on several cat images, see figure (4). Several of these images
were used in [8] and we thank Coughlan and Shen for supplying them. In all examples, our
algorithm detects the cat correctly despite the deformations of the cat from the prototype,
see figure (2). The detection was performed in less than 0.6 seconds (with unoptimized
code). The images are 320 ? 240 and the preprocessing time is included.
The algorithm is efficient since the subfeatures give bottom-up proposals which constrain
the positions of the full model. For example, figure (5) shows the proposals for ears for the
cluttered cat image (center panel of figure (4)).
Figure 4: Cat with Occlusion (top panels). Cat with clutter (centre panel). Cat with eyes
(bottom panel).
We next illustrate our approach on the task of detecting horses. This requires a more
complicated hierarchy, see figure (6).
The algorithm succeeds in detecting the horse, see right panels of figure (7), using the
prototype template shown in the left panel of figure (7).
Finally, we illustrate this approach for the much studied task of detecting hands, see [5,11].
Our approach detects hand from the Cambridge dataset in under a second, see figure (8).
(We are grateful to Thayananthan, Stenger, Torr, and Cipolla for supplying these images).
Figure 5: Cat Proposals: Left ears (left three panels). Right ears (right three panels).
Figure 6: Horse Hierarchy. This is more complicated than the cat.
Figure 7: The left panels show the prototype horse (top left panel) and its feature points
(bottom left panel). The right panel shows the input image (top right panel) and the position
of the horse as detected by the algorithm (bottom right panel).
Figure 8: Prototype hand (top left panel), edge map of prototype hand (bottom left panel),
Test hand (top right panel), Test hand edges (bottom right panel). 40 points are used.
5 Comparison with alternative methods
We ran the algorithm on images of typical size 320 × 240. There were usually 4000 segments
after edge grouping. The templates had between 15 and 24 points. The average speed was
0.6 seconds on a laptop with a 1.6 GHz Intel Pentium CPU (including all processing: edge
detection, edge grouping, and object detection).
Other papers report times of seconds to minutes for detecting deformable objects from
similar images [5,6,7,8]. So our approach is up to 100 times faster.
The Soft-Assign method in [15] has the ability to deal with objects with around 200 key
points, but requires the initialization of the template to be close to the target object. This
requirement is not practical in many applications. In our proposed method, there is no need
to initialize the template near to the target.
Our hierarchical compositional tree structure is similar to the standard divide and conquer
strategy used in some computer science algorithms. This may roughly be expected to
scale as log N where N is the number of points on the deformable template. But precise
complexity convergence results are difficult to obtain because they depend on the topology
of the template, the amount of clutter in the background, and other factors.
This approach can be applied to any graphical model such as [10,12]. It is straightforward
to design hierarchical compositional structures for objects based on their natural decompositions into parts.
There are alternative, and more sophisticated ways, to perform inference on graphical models by decomposing them into sub-graphs, see for example [14]. But these are typically far
more computationally demanding.
6 Conclusion
We have presented a hierarchical compositional system for rapidly detecting deformable
objects in images by performing inference on graphical models. Computation is performed
by passing messages up and down the tree. The system detects objects in under a second
on images of size 320 × 240. This makes the approach practical for real-world applications.
Our approach is similar in spirit to DDMCMC [13] in that we use proposals to guide
the search for objects. In this paper, the proposals are based on a hierarchy of features
which enables efficient computation. The low-level features propose more complex features which are validated by the probability models of the complex features. We have not
found it necessary to perform stochastic sampling, though it is straightforward to do so in
this framework.
Acknowledgments
This research was supported by NSF grant 0413214.
References
[1] Viola, P. and Jones, M. (2001). "Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade". In Proceedings NIPS01.
[2] Schneiderman, H. and Kanade, T. (2000). "A Statistical Method for 3D Object Detection Applied to Faces and Cars". In Computer Vision and Pattern Recognition.
[3] Chen, X. and Yuille, A.L. (2004). "AdaBoost Learning for Detecting and Reading Text in City Scenes". Proceedings CVPR.
[4] Lowe, D.G. (1999). "Object recognition from local scale-invariant features". In Proc. International Conference on Computer Vision ICCV, Corfu, pages 1150-1157.
[5] Coughlan, J.M., Snow, D., English, C. and Yuille, A.L. (2000). "Efficient Deformable Template Detection and Localization without User Initialization". Computer Vision and Image Understanding, 78, pp 303-319.
[6] Felzenszwalb, P. (2005). "Representation and Detection of Deformable Shapes". IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 2.
[7] Coughlan, J.M., and Ferreira, S. (2002). "Finding Deformable Shapes using Loopy Belief Propagation". In Proceedings European Conference on Computer Vision, 2002.
[8] Coughlan, J.M., and Shen, H. (2004). "Shape Matching with Belief Propagation: Using Dynamic Quantization to Accommodate Occlusion and Clutter". In GMBV.
[9] Geman, S., Potter, D. and Chi, Z. (2002). "Composition systems". Quarterly of Applied Mathematics, LX, pp 707-736.
[10] Fergus, R., Perona, P. and Zisserman, A. (2003). "Object Class Recognition by Unsupervised Scale-Invariant Learning". Proceedings CVPR, (2), pp 264-271.
[11] Thayananthan, A., Stenger, B., Torr, P. and Cipolla, R. (2003). "Shape context and chamfer matching in cluttered scenes". In Proc. Conf. Comp. Vision Pattern Rec., pp. 127-133.
[12] Berg, A.C., Berg, T.L., and Malik, J. (2005). "Shape Matching and Object Recognition using Low Distortion Correspondence". Proceedings CVPR.
[13] Tu, Z., Chen, X., Yuille, A.L., and Zhu, S.C. (2003). "Image Parsing: Unifying Segmentation, Detection, and Recognition". In Proceedings ICCV.
[14] Wainwright, M.J., Jaakkola, T.S., and Willsky, A.S. "Tree-Based Reparameterization Framework for Analysis of Sum-Product and Related Algorithms". IEEE Transactions on Information Theory, Vol. 49, No. 5, pp 1120-1146, 2003.
[15] Chui, H. and Rangarajan, A. "A New Algorithm for Non-Rigid Point Matching". In Proceedings CVPR 2000.
Analysis of Spectral Kernel Design based
Semi-supervised Learning
Tong Zhang
Yahoo! Inc.
New York City, NY 10011
Rie Kubota Ando
IBM T. J. Watson Research Center
Yorktown Heights, NY 10598
Abstract
We consider a framework for semi-supervised learning using spectral
decomposition based un-supervised kernel design. This approach subsumes a class of previously proposed semi-supervised learning methods
on data graphs. We examine various theoretical properties of such methods. In particular, we derive a generalization performance bound, and
obtain the optimal kernel design by minimizing the bound. Based on
the theoretical analysis, we are able to demonstrate why spectral kernel
design based methods can often improve the predictive performance. Experiments are used to illustrate the main consequences of our analysis.
1 Introduction
Spectral graph methods have been used both in clustering and in semi-supervised learning.
This paper focuses on semi-supervised learning, where a classifier is constructed from both
labeled and unlabeled training examples. Although previous studies showed that this class
of methods work well for certain concrete problems (for example, see [1, 4, 5, 6]), there
is no satisfactory theory demonstrating why (and under what circumstances) such methods
should work.
The purpose of this paper is to develop a more complete theoretical understanding for graph
based semi-supervised learning. In Theorem 2.1, we present a transductive formulation of
kernel learning on graphs which is equivalent to supervised kernel learning. This new
kernel learning formulation includes some of the previous proposed graph semi-supervised
learning methods as special cases. A consequence is that we can view such graph-based
semi-supervised learning methods as kernel design methods that utilize unlabeled data; the
designed kernel is then used in the standard supervised learning setting. This insight allows
us to prove useful results concerning the behavior of graph based semi-supervised learning
from the more general view of spectral kernel design. Similar spectral kernel design ideas
also appeared in [2]. However, they didn?t present a graph-based learning formulation
(Theorem 2.1 in this paper); nor did they study the theoretical properties of such methods.
We focus on two issues for graph kernel learning formulations based on Theorem 2.1. First,
we establish the convergence of graph based semi-supervised learning (when the number
of unlabeled data increases). Second, we obtain a learning bound, which can be used to
compare the performance of different kernels. This analysis gives insights to what are good
kernels, and why graph-based spectral kernel design is often helpful in various applications.
Examples are given to justify the theoretical analysis. Due to the space limitations, proofs
will not be included in this paper.
2 Transductive Kernel Learning on Graphs
We shall start with notations for supervised learning. Consider the problem of predicting
a real-valued output Y based on its corresponding input vector X. In the standard machine learning formulation, we assume that the data (X, Y ) are drawn from an unknown
underlying distribution D. Our goal is to find a predictor p(x) so that the expected true
loss of p given below is as small as possible: R(p(·)) = E_{(X,Y)∼D} L(p(X), Y), where we
use E_{(X,Y)∼D} to denote the expectation with respect to the true (but unknown) underlying
distribution D. Typically, one needs to restrict the hypothesis function family size so that a
stable estimate within the function family can be obtained from a finite number of samples.
We are interested in learning in Hilbert spaces. For notational simplicity, we assume that
there is a feature representation φ(x) ∈ H, where H is a high (possibly infinite) dimensional
feature space. We denote φ(x) by column vectors, so that the inner product in the Hilbert
space H is the vector product. A linear classifier p(x) on H can be represented by a vector
w ∈ H such that p(x) = wᵀφ(x).
Let the training samples be (X₁, Y₁), ..., (X_n, Y_n). We consider the following regularized
linear prediction method on H:
p̂(x) = ŵᵀφ(x),   ŵ = arg min_{w∈H} [ (1/n) Σ_{i=1}^n L(wᵀφ(X_i), Y_i) + λ wᵀw ].   (1)
If H is an infinite dimensional space, then it is not feasible to solve (1) directly. A
remedy is to use kernel methods. Given a feature representation φ(x), we can define the
kernel k(x, x') = φ(x)ᵀφ(x'). It is well-known (the so-called representer theorem) that the
solution of (1) can be represented as p̂(x) = Σ_{i=1}^n α̂_i k(X_i, x), where [α̂_i] is given by
[α̂_i] = arg min_{[α_i]∈Rⁿ} [ (1/n) Σ_{i=1}^n L( Σ_{j=1}^n α_j k(X_i, X_j), Y_i ) + λ Σ_{i,j=1}^n α_i α_j k(X_i, X_j) ].   (2)
The above formulations of kernel methods are standard. In the following, we present an
equivalence of supervised kernel learning to a specific semi-supervised formulation. Although this representation is implicit in some earlier papers, the explicit form of this method
is not well-known. As we shall see later, this new kernel learning formulation is critical for
analyzing a class of graph-based semi-supervised learning methods.
In this framework, the data graph consists of nodes that are the data points Xj . The edge
connecting two nodes Xi and Xj is weighted by k(Xi , Xj ). The following theorem, which
establishes the graph kernel learning formulation we will study in this paper, essentially
implies that graph-based semi-supervised learning is equivalent to the supervised learning
method which employs the same kernel.
Theorem 2.1 (Graph Kernel Learning) Consider labeled data {(Xi , Yi )}i=1,...,n and
unlabeled data X_j (j = n+1, ..., m). Consider real-valued vectors f = [f₁, ..., f_m]ᵀ ∈ R^m,
and the following semi-supervised learning method:
" n
#
1X
T
?1
?
f = arg infm
L(fi , Yi ) + ?f K f ,
(3)
f ?R
n i=1
where K (often called the gram matrix in kernel learning or the affinity matrix in graph learning)
is an m × m matrix with K_{i,j} = k(X_i, X_j) = φ(X_i)ᵀφ(X_j). Let p̂ be the solution of (1);
then f̂_j = p̂(X_j) for j = 1, ..., m.
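The equivalence can be checked numerically. The sketch below is ours, assuming the least squares loss L(p, y) = (p − y)², a synthetic Gaussian kernel, and the closed-form minimizers of (2) and (3); it compares the two sets of predictions.

import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 10, 50, 0.1
X = rng.standard_normal((m, 2))
y = np.sign(X[:n, 0]) + 0.1 * rng.standard_normal(n)    # labels for first n points
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))       # Gaussian gram matrix

# Supervised kernel method (2) with squared loss: alpha = (K_nn + n*lam*I)^{-1} y,
# predictions p(X_j) = K[j, :n] @ alpha.
alpha = np.linalg.solve(K[:n, :n] + n * lam * np.eye(n), y)
f_sup = K[:, :n] @ alpha

# Graph formulation (3): minimize (1/n)||f_L - y||^2 + lam * f^T K^{-1} f,
# solved via the equivalent system (K E^T E + n*lam*I) f = K E^T y.
E = np.zeros((n, m)); E[np.arange(n), np.arange(n)] = 1.0
f_graph = np.linalg.solve(K @ E.T @ E + n * lam * np.eye(m), K @ E.T @ y)

print(np.allclose(f_sup, f_graph))   # True, as Theorem 2.1 asserts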
The kernel gram matrix K is always positive semi-definite. However, if K is not full rank
(singular), then the correct interpretation of fᵀK⁻¹f is lim_{μ→0⁺} fᵀ(K + μI_{m×m})⁻¹f,
where I_{m×m} is the m × m identity matrix. If we start with a given kernel k and let
K = [k(X_i, X_j)], then a semi-supervised learning method of the form (3) is equivalent
to the supervised method (1). It follows that with a formulation like (3), the only way to
utilize unlabeled data is to replace K by a kernel K̃ in (3), or k by k̃ in (2), where K̃ (or
k̃) depends on the unlabeled data. In this setting, the only benefit of unlabeled data is to
construct a good kernel based on unlabeled data.
Some previous graph-based semi-supervised learning methods employ the same formulation (3) with K⁻¹ replaced by the graph Laplacian operator L (which we will describe
in Section 5). However, the equivalence of this formulation and supervised kernel learning
(with kernel matrix K = L⁻¹) was not obtained in these earlier studies. This equivalence is
important for good theoretical understanding, as we will see later in this paper. Moreover,
by treating graph-based supervised learning as unsupervised kernel design (see Figure 1),
the scope of this paper is more general than graph Laplacian based methods.
Input: labeled data [(X_i, Y_i)]_{i=1,...,n}, unlabeled data X_j (j = n+1, ..., m),
       shrinkage factors s_j ≥ 0 (j = 1, ..., m), kernel function k(·,·)
Output: predictive values f̂'_j on X_j (j = 1, ..., m)
  Form the kernel matrix K = [k(X_i, X_j)] (i, j = 1, ..., m)
  Compute the kernel eigen-decomposition: K = m Σ_{j=1}^m λ_j v_j v_jᵀ,
       where the (λ_j, v_j) are eigenpairs of K (v_jᵀ v_j = 1)
  Modify the kernel matrix as: K̃ = m Σ_{j=1}^m s_j λ_j v_j v_jᵀ   (*)
  Compute f̂' = arg min_{f∈R^m} (1/n) Σ_{i=1}^n L(f_i, Y_i) + λ fᵀK̃⁻¹f.
Figure 1: Spectral kernel design based semi-supervised learning on a graph
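A compact rendering of this procedure for the least squares loss might look as follows; this is our own sketch (the closed-form solve reuses the identity from the earlier snippet), not code from the paper.

import numpy as np

def spectral_kernel_design(K, s, y, lam):
    # K: m x m gram matrix; s: length-m shrinkage factors, ordered to
    # match decreasing eigenvalues; y: length-n labels for nodes 0..n-1.
    m, n = K.shape[0], len(y)
    evals, V = np.linalg.eigh(K)                 # evals[j] = m * lambda_j
    order = np.argsort(evals)[::-1]              # decreasing eigenvalue order
    evals, V = evals[order], V[:, order]
    K_tilde = (V * (s * evals)) @ V.T            # step (*) in Figure 1
    E = np.zeros((n, m)); E[np.arange(n), np.arange(n)] = 1.0
    # minimizer of (1/n)||f_L - y||^2 + lam * f^T K~^{-1} f
    return np.linalg.solve(K_tilde @ E.T @ E + n * lam * np.eye(m),
                           K_tilde @ E.T @ y)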
In Figure 1, we consider a general formulation of semi-supervised learning on a data
graph through spectral kernel design. This is the method we will analyze in the paper. As
a special case, we can let s_j = g(λ_j) in Figure 1, where g is a rational function; then
K̃ = g(K/m)K. In this special case, we do not have to compute the eigen-decomposition of
K. Therefore we obtain a simpler algorithm with the (*) in Figure 1 replaced by
K̃ = g(K/m)K.   (4)
As mentioned earlier, the idea of using spectral kernel design has appeared in [2], although
they didn't base their method on the graph formulation (3). However, we believe our analysis also sheds light on their methods. The semi-supervised learning method described in
Figure 1 is useful only when f̂' is a better predictor than f̂ (which uses the
original kernel K); in other words, only when the new kernel K̃ is better than K.
In the next few sections, we will investigate the following issues concerning the theoretical
behavior of this algorithm: (a) the limiting behavior of f̂' as m → ∞, that is, whether f̂'_j
converges for each j; (b) the generalization performance of (3); (c) optimal kernel design
by minimizing the generalization error, and its implications; (d) statistical models under
which spectral kernel design based semi-supervised learning is effective.
3 The Limiting Behavior of Graph-based Semi-supervised Learning
We want to show that as m → ∞, the semi-supervised algorithm in Figure 1 is well-behaved;
that is, f̂'_j converges as m → ∞. This is one of the most fundamental issues.
Using the feature space representation, we have k(x, x') = φ(x)ᵀφ(x'). Therefore a change
of kernel can be regarded as a change of feature mapping. In particular, we consider a
feature transformation of the form φ̃(x) = S^{1/2} φ(x), where S is an appropriate positive
semi-definite operator on H. The following result establishes an equivalent feature space
formulation of the semi-supervised learning method in Figure 1.
Theorem 3.1 Using the notations in Figure 1, assume k(x, x') = φ(x)ᵀφ(x'). Consider
S = Σ_{j=1}^m s_j u_j u_jᵀ, where u_j = Ψ v_j / √(mλ_j) and Ψ = [φ(X₁), ..., φ(X_m)]; then
(λ_j, u_j) is an eigenpair of ΨΨᵀ/m. Let
p̂'(x) = ŵ'ᵀ S^{1/2} φ(x),   ŵ' = arg min_{w∈H} [ (1/n) Σ_{i=1}^n L(wᵀS^{1/2}φ(X_i), Y_i) + λ wᵀw ].
Then f̂'_j = p̂'(X_j) (j = 1, ..., m).
The asymptotic behavior of Figure 1 when m → ∞ can be easily understood from
Theorem 3.1. In this case, we just replace ΨΨᵀ/m = (1/m) Σ_{j=1}^m φ(X_j)φ(X_j)ᵀ by
E_X φ(X)φ(X)ᵀ. The spectral decomposition of E_X φ(X)φ(X)ᵀ corresponds to the feature space PCA. It is clear that if S converges, then the feature space algorithm in Theorem 3.1 also converges. In general, S converges if the eigenvectors u_j converge and the
shrinkage factors s_j are bounded. As a special case, we have the following result.
Theorem 3.2 Consider a sequence of data X₁, X₂, ... drawn from a distribution, with
only the first n points labeled. Assume that when m → ∞, Σ_{j=1}^m φ(X_j)φ(X_j)ᵀ/m converges to E_X φ(X)φ(X)ᵀ almost surely, and that g is a continuous function on the spectral
range of E_X φ(X)φ(X)ᵀ. Then in Figure 1 with (*) given by (4) and kernel
k(x, x') = φ(x)ᵀφ(x'), f̂'_j converges almost surely for each fixed j.
4 Generalization analysis on graph
We study the generalization behavior of the graph-based semi-supervised learning algorithm
(3), and use it to compare different kernels. We will then use this bound to justify the kernel
design method given in Section 2. To measure the sample complexity, we consider m
points (X_j, Y_j) for j = 1, ..., m. We randomly pick n distinct integers i₁, ..., i_n from
{1, ..., m} uniformly (sampling without replacement), and regard them as the n labeled training data. We obtain predictive values f̂_j on the graph using the semi-supervised learning
method (3) with the labeled data, and test on the remaining m − n data points. We are
interested in the average predictive performance over all random draws.
Theorem 4.1 Consider (X_j, Y_j) for j = 1, ..., m. Assume that we randomly pick n distinct
integers i₁, ..., i_n from {1, ..., m} uniformly (sample without replacement), and denote this
set by Z_n. Let f̂(Z_n) be the semi-supervised learning method (3) using the training data in
Z_n: f̂(Z_n) = arg min_{f∈R^m} (1/n) Σ_{i∈Z_n} L(f_i, Y_i) + λ fᵀK⁻¹f. If |∂L(p, y)/∂p| ≤ γ
and L(p, y) is convex with respect to p, then we have
E_{Z_n} (1/(m−n)) Σ_{j∉Z_n} L(f̂_j(Z_n), Y_j) ≤ inf_{f∈R^m} [ (1/m) Σ_{j=1}^m L(f_j, Y_j) + λ fᵀK⁻¹f + γ² tr(K)/(2λnm) ].
The bound depends on the regularization parameter λ in addition to the kernel K. In order
to compare different kernels, it is reasonable to compare the bound with the optimal λ for
each K. That is, in addition to minimizing over f, we also minimize over λ on the right-hand
side of the bound. Note that in practice, it is usually not difficult to find a nearly-optimal λ
through cross validation, implying that it is reasonable to assume that we can choose the
optimal λ in the bound. With the optimal λ, we obtain:
E_{Z_n} (1/(m−n)) Σ_{j∉Z_n} L(f̂_j(Z_n), Y_j) ≤ inf_{f∈R^m} [ (1/m) Σ_{j=1}^m L(f_j, Y_j) + γ √(2R(f, K)/n) ],
where R(f, K) = tr(K/m) fᵀK⁻¹f is the complexity of f with respect to the kernel K.
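For completeness, the optimal-λ form follows by minimizing λ fᵀK⁻¹f + γ²tr(K)/(2λnm) over λ > 0; the short derivation below is ours and is not in the original text:

\min_{\lambda>0}\left[\lambda\, f^T K^{-1} f + \frac{\gamma^2\,\mathrm{tr}(K)}{2\lambda n m}\right]
  = 2\sqrt{f^T K^{-1} f \cdot \frac{\gamma^2\,\mathrm{tr}(K)}{2 n m}}
  = \gamma\sqrt{\frac{2\,\mathrm{tr}(K/m)\, f^T K^{-1} f}{n}}
  = \gamma\sqrt{\frac{2 R(f,K)}{n}},
\qquad
\lambda^{*} = \gamma\sqrt{\frac{\mathrm{tr}(K)}{2 n m\, f^T K^{-1} f}}.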
If we define K̃ as in Figure 1, then the complexity of a function f with respect to K̃ is given
by R(f, K̃) = (Σ_{j=1}^m s_j λ_j)(Σ_{j=1}^m α_j²/(s_j λ_j)). If we believe that a good approximate
target function f can be expressed as f = Σ_j α_j v_j with |α_j| ≤ β_j for some known β_j,
then based on this belief, the optimal choice of the shrinkage factor becomes s_j = β_j/λ_j.
That is, the kernel that optimizes the bound is K̃ = Σ_j β_j v_j v_jᵀ, where the v_j are normalized
eigenvectors of K. In this case, we have R(f, K̃) ≤ (Σ_j β_j)². The eigenvalues of the
optimal kernel are thus independent of K, and depend only on the spectral coefficient
range β_j of the approximate target function.
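In code, the bound-optimal shrinkage is a one-liner; this sketch is ours and assumes the eigenvalues λ_j of K and prior bounds β_j on the target spectral coefficients are available as arrays.

import numpy as np

def optimal_shrinkage(lambdas, beta):
    # s_j = beta_j / lambda_j makes the new eigenvalues s_j * lambda_j
    # proportional to beta_j, independent of the original spectrum,
    # and gives the bound R(f, K~) <= (sum_j beta_j)^2.
    s = beta / lambdas
    return s, beta.sum() ** 2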
There is no reason to believe that the eigenvalues λ_j of the original kernel K are
proportional to the target spectral coefficient range. If we have some guess of the spectral
coefficients of the target, then one may use this knowledge to obtain a better kernel. This
justifies why spectral kernel design based algorithms can be potentially helpful (when we
have some information on the target spectral coefficients). In practice, it is usually difficult
to have a precise guess of β_j. However, for many application problems, we observe in
practice that the eigenvalues of the kernel K decay more slowly than the target spectral
coefficients. In this case, our analysis implies that we should use an alternative kernel with
faster eigenvalue decay: for example, using K² instead of K. This has a dimension reduction
effect; that is, we effectively project the data onto the principal components of the data.
The intuition is also quite clear: if the dimension of the target function is small (the spectral
coefficients decay fast), then we should project the data onto those dimensions by reducing
the remaining noisy dimensions (corresponding to fast kernel eigenvalue decay).
5 Spectral analysis: the effect of input noise
We provide a justification on why spectral coefficients of the target function often decay
faster than the eigenvalues of a natural kernel K. In essence, this is due to the fact that
input vector X is often corrupted with noise. Together with results in the previous section,
we know that in order to achieve optimal performance, we need to use a kernel with faster
eigenvalue decay. We will demonstrate this phenomenon under a statistical model, and use
the feature space notation in Section 3. For simplicity, we assume that φ(x) = x.
We consider a two-class classification problem in a Euclidean space (with the standard
2-norm inner product), where the label Y = ±1. We first start with a noise-free model,
where the data can be partitioned into p clusters. Each cluster ℓ is composed of a single
center point x̄_ℓ (having zero variance) with label ȳ_ℓ = ±1. In this model, assume that the
centers are well separated, so that there is a weight vector w* such that w*ᵀw* < ∞ and
w*ᵀx̄_ℓ = ȳ_ℓ. Without loss of generality, we may assume that the x̄_ℓ and w* belong to a
p-dimensional subspace V_p. Let V_p^⊥ be its orthogonal complement. Assume now that the
observed input data are corrupted with noise. We first generate a center index ℓ, and then
noise δ (which may depend on ℓ). The observed input datum is the corrupted X = x̄_ℓ + δ,
and the observed output is Y = w*ᵀx̄_ℓ. In this model, letting ℓ(X_i) be the center corresponding to X_i, the observation can be decomposed as X_i = x̄_{ℓ(X_i)} + δ(X_i), and
Y_i = w*ᵀx̄_{ℓ(X_i)}. Given noise δ, we decompose it as δ = δ₁ + δ₂, where δ₁ is the orthogonal projection of δ onto V_p, and δ₂ is the orthogonal projection of δ onto V_p^⊥. We assume
that δ₁ is a small noise component; the component δ₂ can be large but has small variance in
every direction.
Theorem 5.1 Consider the data generation model in this section, with observation X =
x̄_ℓ + δ and Y = w*ᵀx̄_ℓ. Assume that δ is conditionally zero-mean given ℓ: E_{δ|ℓ} δ = 0.
Let EXXᵀ = Σ_j λ_j u_j u_jᵀ be the spectral decomposition with decreasing eigenvalues λ_j
(u_jᵀu_j = 1). Then the following claims are valid: let σ₁² ≥ σ₂² ≥ ··· be the eigenvalues of
Eδ₂δ₂ᵀ; then λ_j ≥ σ_j². If ‖δ₁‖₂ ≤ b/‖w*‖₂, then |w*ᵀX_i − Y_i| ≤ b. And for all t ≥ 0,
Σ_{j≥1} (w*ᵀu_j)² λ_j^{−t} ≤ w*ᵀ(E x̄_ℓ x̄_ℓᵀ)^{−t} w*.
Consider m points X₁, ..., X_m. Let Ψ = [X₁, ..., X_m] and K = ΨᵀΨ = m Σ_j λ_j v_j v_jᵀ
be the kernel spectral decomposition. Let u_j = Ψ v_j / √(mλ_j), f_i = w*ᵀX_i, and
f = Σ_j α_j v_j. Then it is not difficult to verify that α_j = √(mλ_j) w*ᵀu_j. If we assume that
asymptotically (1/m) Σ_{i=1}^m X_i X_iᵀ → EXXᵀ, then we have the following consequences:
• f_i = w*ᵀX_i is a good approximate target when b is small. In particular, if b < 1,
then this function always gives the correct class label.
• For all t > 0, the spectral coefficients α_j of f decay as (1/m) Σ_{j=1}^m α_j²/λ_j^{1+t} ≤
w*ᵀ(E x̄_ℓ x̄_ℓᵀ)^{−t} w*.
• The eigenvalue λ_j decays slowly when the noise spectrum decays slowly: λ_j ≥ σ_j².
If the clean data are well behaved, in that we can find a weight vector such that
w*ᵀ(E_X x̄_{ℓ(X)} x̄_{ℓ(X)}ᵀ)^{−t} w* is bounded for some t > 1, then when the data are corrupted
with noise, we can find a good approximate target whose spectral coefficients decay faster
(on average) than the kernel eigenvalues. This analysis implies that if the feature representation associated with the original kernel is corrupted with noise, then it is often helpful
to use a kernel with faster spectral decay. For example, instead of using K, we may use
K̃ = K². However, it may not be easy to estimate the exact decay rate of the target spectral
coefficients. In practice, one may use cross validation to optimize the kernel.
A kernel with fast spectral decay projects the data into the most prominent principal components. Therefore we are interested in designing kernels which can achieve a dimension
reduction effect. Although one may use direct eigenvalue computation, an alternative is to
use a function g(K/m)K for such an effect, as in (4). For example, we may consider a
normalized kernel such that K/m = Σ_j λ_j u_j u_jᵀ with 0 ≤ λ_j ≤ 1. A standard normalization
method is to use D^{−1/2} K D^{−1/2}, where D is the diagonal matrix with each entry corresponding to the row sums of K. It follows that g(K/m)K = m Σ_j g(λ_j) λ_j u_j u_jᵀ. We are
interested in a function g such that g(λ)λ ≈ 1 when λ ∈ [λ₀, 1] for some λ₀, and g(λ)λ ≈ 0
when λ < λ₀ (where λ₀ is close to 1). One such function is to let g(λ)λ = (1 − α)/(1 − αλ).
This is the function used in various graph Laplacian formulations with the normalized Gaussian kernel as the initial kernel K; for example, see [5]. Our analysis suggests that it is
the dimension reduction effect of this function that is important, rather than the connection
to the graph Laplacian. As we shall see in the empirical examples, other kernels such as
K², which achieve a similar dimension reduction effect (but have nothing to do with the
graph Laplacian), also improve performance.
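A small sketch of this construction (ours; the symmetric nonnegative weight matrix W is assumed given, e.g. from a k-NN graph):

import numpy as np

def normalized_kernel(W):
    # K = D^{-1/2} W D^{-1/2}; the eigenvalues of K then lie in [-1, 1]
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    return W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def soft_step(lam, alpha=0.999):
    # g(lam) * lam = (1 - alpha) / (1 - alpha * lam): approximately 1 for
    # lam near 1 and approximately 0 for small lam -- a soft dimension cut.
    return (1.0 - alpha) / (1.0 - alpha * lam)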
6 Empirical Examples
This section shows empirical examples to demonstrate some consequences of our theoretical analysis. We use the MNIST data set (http://yann.lecun.com/exdb/mnist/), consisting
of hand-written digit images (representing 10 classes, from digit '0' to digit '9'). In the
following experiments, we randomly draw m = 2000 samples. We regard n = 100 of
them as labeled data, and the remaining m − n = 1900 as unlabeled test data.
[Figure: left panel "Normalized 25NN, MNIST" plots spectral coefficients against dimension d; right panel "Accuracy: 25NN, MNIST" plots classification accuracy against dimension d, for the methods Y, K^4, K^3, K^2, K, [1,..,1,0,..], Inverse, and the original K.]
Figure 2: Left: spectral coefficients; right: classification accuracy.
Throughout the experiments, we use the least squares loss L(p, y) = (p − y)² for simplicity.
We study the performance of various kernel design methods by changing the spectral
coefficients of the initial gram matrix K, as in Figure 1. Below we write λ̃_j for the new
spectral coefficient of the new gram matrix K̃, i.e., K̃ = Σ_{i=1}^m λ̃_i v_i v_iᵀ. We study the
following kernel design methods (also see [2]), with a dimension cut-off parameter d, so
that λ̃_i = 0 when i > d. (a) [1, ..., 1, 0, ..., 0]: λ̃_i = 1 if i ≤ d, and 0 otherwise; this
was used in spectral clustering [3]. (b) K: λ̃_i = λ_i if i ≤ d; 0 otherwise. This method is
essentially kernel principal component analysis, which keeps the d most significant principal components of K. (c) K^p: λ̃_i = λ_i^p if i ≤ d; 0 otherwise. We set p = 2, 3, 4. This
accelerates the decay of the eigenvalues of K. (d) Inverse: λ̃_i = 1/(1 − αλ_i) if i ≤ d; 0
otherwise. Here α is a constant close to 1 (we used 0.999). This is essentially graph-Laplacian
based semi-supervised learning for a normalized kernel (e.g. see [5]); note that the standard
graph-Laplacian formulation sets d = m. (e) Y: λ̃_i = |Yᵀv_i| if i ≤ d; 0 otherwise. This is
the oracle kernel that optimizes our generalization bound. The purpose of testing this oracle
method is to validate our analysis by checking whether a good kernel in our theory produces
good classification performance on real data. Note that in the experiments, we use Y averaged
over the ten classes; therefore the resulting kernel will not be the best possible kernel
for each specific class, and thus its performance may not always be optimal. (The five
spectral modifications are collected in the code sketch below.)
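A sketch of the five designs (ours, written against an eigendecomposition of K sorted by decreasing eigenvalue; for the "inverse" design the eigenvalues are assumed normalized to lie in [−1, 1]):

import numpy as np

def design_spectrum(lambdas, V, Y, method, d, alpha=0.999, p=2):
    lt = np.zeros_like(lambdas)                  # new coefficients, zero past d
    if method == "ones":                         # (a) hard threshold
        lt[:d] = 1.0
    elif method == "K":                          # (b) kernel PCA truncation
        lt[:d] = lambdas[:d]
    elif method == "Kp":                         # (c) accelerated decay, p = 2,3,4
        lt[:d] = lambdas[:d] ** p
    elif method == "inverse":                    # (d) graph-Laplacian style
        lt[:d] = 1.0 / (1.0 - alpha * lambdas[:d])
    elif method == "oracle":                     # (e) near-oracle |Y^T v_i|
        lt[:d] = np.abs(V[:, :d].T @ Y)
    return lt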
Figure 2 shows the spectral coefficients of the above mentioned kernel design methods
and the corresponding classification performance. The initial kernel is normalized 25-NN,
which is defined as K = D^{−1/2} W D^{−1/2} (see the previous section), where W_{ij} = 1 if either
the i-th example is one of the 25 nearest neighbors of the j-th example or vice versa; and
0 otherwise. As expected, the results demonstrate that the target spectral coefficients Y
decay faster than that of the original kernel K. Therefore it is useful to use kernel design
methods that accelerate the eigenvalue decay. The accuracy plot on the right is consistent
with our theory. The near-oracle kernel 'Y' performs well, especially when the dimension
cut-off is large. With an appropriate dimension d, all methods perform better than the supervised baseline (original K), which is below 65%. With an appropriate dimension cut-off, all
methods perform similarly (over 80%). However, K^p (p = 2, 3, 4) is less sensitive to
the cut-off dimension d than the kernel principal component dimension reduction method
K. Moreover, the hard threshold method in spectral clustering ([1, . . . , 1, 0, . . . , 0]) is not
stable. Similar behavior can also be observed with other initial kernels. Figure 3 shows
the classification accuracy with the standard Gaussian kernel as the initial kernel K, both
with and without normalization. We also used different bandwidths t to illustrate that the
behavior of the different methods is similar across t (in a reasonable range). The result shows
that normalization is not critical for achieving high performance, at least for
this data. Again, we observe that the near oracle method performs extremely well. The
spectral clustering kernel is sensitive to the cut-off dimension, while K^p with p = 2, 3, 4
is quite stable. The standard kernel principal component dimension reduction (method
K) performs very well with appropriately chosen dimension cut-off. The experiments are
consistent with our theoretical analysis.
[Figure: two panels, "Accuracy: normalized Gaussian, MNIST" and "Accuracy: Gaussian, MNIST", plotting classification accuracy against dimension d for the methods Y, K^4, K^3, K^2, K, [1,..,1,0,..], Inverse, and the original K.]
Figure 3: Classification accuracy with the Gaussian kernel k(i, j) = exp(−‖x_i − x_j‖₂²/t).
Left: normalized Gaussian (t = 0.1); right: unnormalized Gaussian (t = 0.3).
7 Conclusion
We investigated a class of graph-based semi-supervised learning methods. By establishing
a graph-based formulation of kernel learning, we showed that this class of semi-supervised
learning methods is equivalent to supervised kernel learning with unsupervised kernel design (explored in [2]). We then obtained a generalization bound, which implies that the
eigenvalues of the optimal kernel should decay at the same rate of the target spectral coefficients. Moreover, we showed that input noise can cause the target spectral coefficients
to decay faster than the kernel spectral coefficients. The analysis explains why it is often
helpful to modify the original kernel eigenvalues to achieve a dimension reduction effect.
References
[1] Mikhail Belkin and Partha Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, Special Issue on Clustering:209?239, 2004.
[2] Olivier Chapelle, Jason Weston, and Bernhard Schölkopf. Cluster kernels for semi-supervised learning. In NIPS, 2003.
[3] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis
and an algorithm. In NIPS, pages 849?856, 2001.
[4] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random
walks. In NIPS 2001, 2002.
[5] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Schölkopf. Learning with local and
global consistency. In NIPS 2003, pages 321-328, 2004.
[6] Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using
Gaussian fields and harmonic functions. In ICML 2003, 2003.
Non-Boltzmann Dynamics in Networks of
Spiking Neurons
Michael C. Crair and William Bialek
Department of Physics, and
Department of Molecular and Cell Biology
University of California at Berkeley
Berkeley, CA 94720
ABSTRACT
We study networks of spiking neurons in which spikes are fired as
a Poisson process. The state of a cell is determined by the instantaneous firing rate, and in the limit of high firing rates our model
reduces to that studied by Hopfield. We find that the inclusion
of spiking results in several new features, such as a noise-induced
asymmetry between "on" and "off" states of the cells and probability currents which destroy the usual description of network dynamics in terms of energy surfaces. Taking account of spikes also allows us to calibrate network parameters such as "synaptic weights"
against experiments on real synapses. Realistic forms of the post
synaptic response alter the network dynamics, which suggests a
novel dynamical learning mechanism.
1 INTRODUCTION
In 1943 McCulloch and Pitts introduced the concept of two-state (binary) neurons
as elementary building blocks for neural computation. They showed that essentially
any finite calculation can be done using these simple devices. Two-state neurons are
of questionable biological relevance, yet much of the subsequent work on modeling of
neural networks has been based on McCulloch-Pitts type neurons because the two-state simplification makes analytic theories more tractable. Hopfield (1982, 1984)
showed that an asynchronous model of symmetrically connected two-state neurons
was equivalent to Monte-Carlo dynamics on an 'energy' surface at zero temperature.
The idea that the computational abilities of a neural network can be understood
from the structure of an effective energy surface has been the central theme in much
recent work.
In an effort to understand the effects of noise, Amit, Gutfreund and Sompolinsky
(Amit et al., 1985a; 1985b) assumed that Hopfield's 'energy' could be elevated to
an energy in the statistical mechanics sense, and solved the Hopfield model at finite
temperature. The problem is that the noise introduced in equilibrium statistical
mechanics is of a very special form, and it is not clear that the stochastic properties
of real neurons are captured by postulating a Boltzmann distribution on the energy
surface.
Here we try to do a slightly more realistic calculation, describing interactions among
neurons through action potentials which are fired according to probabilistic rules.
We view such calculations as intermediate between the purely phenomenological
treatment of neural noise by Amit et al. and a fully microscopic description of
neural dynamics in terms of ion channels and their associated noise. We find that
even our limited attempt at biological realism results in some interesting deviations
from previous ideas on network dynamics.
2 THE MODEL
We consider a model where neurons have a continuous firing rate, but the generation
of action potentials is a Poisson process. This means that the "state" of each cell i
is described by the instantaneous rate r_i(t), and the probability that this cell will
fire in a time interval [t, t + dt] is given by r_i(t)dt. Evidence for the near-Poisson
character of neuronal firing can be found in the mammalian auditory nerve (Siebert,
1965; 1968), and retinal ganglion cells (Teich et al., 1978, Teich and Saleh, 1981).
To stay as close as possible to existing models, we assume that the rate r(t) of a
neuron is a sigmoid function, g(x) = 1/(1 + e^{−x}), of the total input x to the neuron.
The input is assumed to be a weighted sum of the spikes received from all other
neurons, so that
r_i(t) = r_m g[ Σ_j J_{ij} Σ_μ f(t − t_μ^j) − θ_i ].   (1)
J_{ij} is the matrix of connection strengths between neurons, r_m is the maximum
spike rate of the neuron, and θ_i is the neuronal threshold. f(t) is a time-weighting
function, corresponding schematically to the time course of the post-synaptic currents
injected by a pre-synaptic spike; a good first order approximation for this function
is f(t) ∼ e^{−t/τ}, but we also consider functions with more than one time constant
(Aidley, 1980, Fetz and Gustafsson, 1983).
We can think of the spike train from the i-th neuron, Σ_μ δ(t − t_μ^i), as an approximation to the true firing rate r_i(t); of course this approximation improves as the
spikes come closer together at high firing rates. If we write
Σ_μ δ(t − t_μ^i) = r_i(t) + η_i(t)   (2)
we have defined the noise η_i in the spike train. The equations of motion for the
rates then become
r_i(t) = r_m g[ Σ_j J_{ij} f ∘ r_j(t) + f ∘ N_i(t) − θ_i ],   (3)
where N_i(t) = Σ_j J_{ij} η_j(t) and f ∘ r_j(t) is the convolution of f(t) with the spike
rate r_j(t). The statistics of the fluctuations in the spike rate η_j(t) are ⟨η_j(t)⟩ = 0,
⟨η_i(t)η_j(t')⟩ = δ_{ij} δ(t − t') r_j(t).
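To make the model concrete, here is a minimal simulation sketch of Eqs. (1)-(3); this is ours, not from the paper, and assumes a single exponential post-synaptic filter and a simple Euler discretization.

import numpy as np

def simulate(J, theta, r_m, tau, T=1.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    n = len(theta)
    x = np.zeros(n)            # filtered synaptic input (f(t) ~ e^{-t/tau})
    rates = []
    for _ in range(int(T / dt)):
        r = r_m / (1.0 + np.exp(-(x - theta)))   # g(x) = 1/(1 + e^{-x})
        spikes = rng.random(n) < r * dt          # Poisson firing in [t, t+dt]
        x += -x * dt / tau + J @ spikes          # leaky integration of spikes
        rates.append(r)
    return np.array(rates)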
3 DYNAMICS
If the post-synaptic response f(t) is exactly exponential, we can invert Eq. (3)
to obtain a first order equation for the normalized spike rate y_i(t) = r_i(t)/r_m.
More precise descriptions of the post-synaptic response will yield higher order time
derivatives with coefficients that depend on the relative time constants in f(t). We
will comment later on the relevance of these higher order terms, but consider first
the lowest order description. By inverting Eq. (3) we obtain a stochastic differential
equation analogous to the Langevin equation describing Brownian motion:
d g⁻¹(y_i)/dt = −∂E/∂y_i + N_i(t),   (4a)
where the deterministic forces −∂E/∂y_i are given by Eq. (4b).
Note that Eq. (4) is nearly equivalent to the "charging equation" Hopfield (1984)
assumed in his discussion of continuous neurons, except we have explicitly included
the noise from the spikes. This system is precisely equivalent to the Hopfield two-state
model in the limit of large spike rate (r_m τ → ∞, J_{ij} = constant) and no
noise. In a thermodynamic system near equilibrium, the noise "force" N_i(t) is
related to the friction coefficient via the fluctuation dissipation theorem. In this
system however, there is no analogous relationship.
A standard transformation, analogous to deriving Einstein's diffusion equation from
the Langevin equation (Stratonovich, 1963, 1967), yields a probabilistic description
for the evolution of the neural system, a form of Fokker-Planck equation for the time
evolution of P({y_i}), the probability that the network is in a state described by the
normalized rates {y_i}; we write the Fokker-Planck equation below for a simple case.
A useful interpretation to consider is that the system, starting in a non-equilibrium
state, diffuses or evolves in phase space, to a final stationary state.
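As an illustration of this diffusive picture, the following Euler-Maruyama sketch (ours; it assumes two symmetrically coupled neurons, a unit time constant, a simple relaxation drift toward g of the total input, and multiplicative noise of variance r_j(t) per unit time as stated above) integrates one sample path:

import numpy as np

def sample_path(J, theta, r_m, T=10.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    y = np.full(2, 0.5)                           # normalized rates r_i / r_m
    path = [y.copy()]
    for _ in range(int(T / dt)):
        x = J * r_m * y[::-1] - theta             # input from the other neuron
        drift = 1.0 / (1.0 + np.exp(-x)) - y      # relax toward g(input)
        eta = np.sqrt(np.maximum(r_m * y, 0.0) * dt) * rng.standard_normal(2)
        y = np.clip(y + drift * dt + J * eta / r_m, 1e-6, 1.0 - 1e-6)
        path.append(y.copy())
    return np.array(path)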
We can make our description of the post-synaptic response f(t) more accurate
by including two (or more) exponential time constants, corresponding roughly to
the rise and fall time of the post synaptic potential. This inclusion necessitates
the addition of a second order term in the Langevin equation (Eq. 4). This is
analogous to including an inertial term in a diffusive description, so that the system
is no longer purely dissipative. This additional complication has some interesting
consequences. Adjusting the relative length of the rise and fall times of the post-synaptic potential affects the rate of relaxation to local equilibrium of the system.
In order to perform most efficaciously as an associative memory, a neural system
will "choose" critical damping time constants, so that relaxation is fastest. Thus,
by adjusting the time course of the post synaptic potential, the system can "learn"
of a local stationary state, without adjusting the synaptic strengths. This novel
learning mechanism could be a form of fine tuning of already established memories,
or could be a unique form of dynamical short-term memory.
4 QUALITATIVE RESULTS
In order to understand the dynamics of our Fokker-Planck equation, we begin by considering the case of two neurons interacting with each other. There are two limiting behaviors. If the neurons are weakly coupled (J < J_c, J_c ≡ 4/(r_m τ)), then the only stable state of the system is with both neurons firing at a mean firing rate, r_m.
If the neurons are strongly (and positively) coupled (J > J_c), then isolated basins of attraction, or stationary states, are formed, one stationary state corresponding to both neurons being active, the other state having both neurons relatively (but not absolutely) quiescent. In the strong coupling limit, one can reduce the problem to motion along a collective coordinate connecting the two stable states. The resulting one dimensional Fokker-Planck equation is
∂P(y, t)/∂t = ∂/∂y [ U'(y) P(y, t) + ∂/∂y ( T(y) P(y, t) ) ],    (5)
where U(y) is an effective potential energy,

U'(y) = y(1 − y) [g^{-1}(y) − ½ r_m J] − (1/τ)(y − ½) + ¼ J^2 r_m y (3 + 5y),    (6)

and T(y) is a spatially varying effective temperature, T(y) = ¼ J^2 r_m y^3 (1 − y)^2.
One can solve to find the size of the stable regions, and the stationary probability distribution,

P(y) = (B / T(y)) exp[ −∫^y U'(y') / T(y') dy' ].    (7)
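Equation (7) is straightforward to evaluate numerically for any given U'(y) and T(y); the sketch below implements only Eq. (7) itself, with the drift and temperature left as user-supplied functions so nothing beyond the equation is assumed.

```python
import numpy as np

def stationary_density(Uprime, T, n=2000, eps=1e-3):
    """Evaluate Eq. (7): P(y) = (B / T(y)) exp(-int^y U'(y')/T(y') dy') on (0, 1)."""
    y = np.linspace(eps, 1.0 - eps, n)
    f = Uprime(y) / T(y)
    # trapezoidal cumulative integral of U'/T
    integral = np.concatenate(([0.0],
                               np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(y))))
    P = np.exp(-integral) / T(y)
    return y, P / np.trapz(P, y)   # normalization fixes the constant B
```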
We have done numerical simulations which confirm the qualitative predictions of the
one dimensional Fokker-Planck equation. This analysis shows that the non-uniform
and asymmetric temperature distribution alters the relative stability of the stable
states, in the favor of the 'off' state. This effect does have some biological pertinence,
as it is well known that on average neurons are more likely to be quiescent than
active. In our model the asymmetry is a direct consequence of the Poisson nature
of the neuronal firing.
Figure 1: Probability current in the stationary state for two neurons that are
strongly interacting. Computed as a ratio of the number of excess excursions in
one direction to the total number of excursions, in percent. In thermodynamic equilibrium, detailed balance would force the current to be zero. Shown as a
function of the number of spikes in an e-folding time of the post-synaptic response.
There are further surprises to be found in the simple two neuron model. Since the
interaction between the neurons is not time reversal invariant, detailed balance is
not maintained in the system. Thus, even the stationary probability distribution
has non-zero probability current, so that the system tends to cycle probabilistically
through state space. The presence of the current further alters the relative probability of the two stable states, as confirmed by numerical simulations, and renders
the application of equilibrium statistical mechanics inappropriate.
Simulations also confirm (Fig. 1) that the probability current falls off with increasing maximum spike rate (r_m τ), because the effective noise is suppressed when the spike rate is high. However, at biologically reasonable spike rates (r_m ≈ 150 s^-1),
the probability current is significant. These currents destroy any sense of a global
energy function or thermodynamic temperature.
One advantage of treating spikes explicitly is that we can relate the abstract synaptic
strength J to observable parameters. In Fig. 2 we compare J with the experimentally accessible spike number to spike number transfer across the synapse, for a two
neuron system. Note that critical coupling (see above) corresponds to a rather large
value of about 4/5 of a spike emitted per spike received.
[Figure 2 plot: spikes generated per spike input, for J/J_critical from 0 to 2.5.]
Figure 2: Single neuron spike response to the receipt of a spike from a coupled
neuron. Since response is probabilistic, fractional spikes are relevant. Computed as
a function of J /Jcritical, where Jcritical is the minimum synaptic strength necessary
for isolated basins of attraction.
Many of the simple ideas we have introduced for the two neuron system carryover
to the multi-neuron case. If the matrix of connection strengths obeys the "Hebb"
rule (often used to model associative memory),
J_ij = (J/N) Σ_{μ=1}^{p} ξ_i^μ ξ_j^μ    (8)
then a stability analysis yields the same critical value for the connection strength J
(note that we have scaled by N, and the sum on μ runs from 1 to p, the number of memories to be stored). Calculation of the spike-out/spike-in ratio for the multi-neuron system at critical coupling shows that it scales like (α/N)^{1/2}, where p = αN.
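For concreteness, the connection matrix of Eq. (8) as reconstructed above can be generated directly; the network size, the loading α, and the overall scale J below are assumed values for illustration.

```python
import numpy as np

# Hebb matrix J_ij = (J/N) sum_mu xi_i^mu xi_j^mu for p = alpha*N random patterns.
rng = np.random.default_rng(2)
N, alpha, J = 1000, 0.05, 1.0               # assumed sizes
p = int(alpha * N)
xi = rng.choice([-1.0, 1.0], size=(p, N))   # p binary memory patterns
W = (J / N) * xi.T @ xi
np.fill_diagonal(W, 0.0)                    # no self-coupling
```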
Since most neural systems naturally have a small spike-out/spike-in ratio, this (together with Fig. 2) suggests that small networks will have to be strongly driven in
order to achieve isolated basins of attraction for "memories;" this is in agreement
with the one available experiment (Kleinfeld et aI., 1990). In contrast, large networks achieve criticality with more natural spike to spike ratios. For instance, if a
network of 10^4 to 10^5 connected neurons is to have multiple stable "memory" states as in the original Hopfield model, we predict that a neuron needs to receive 100-500 contiguous action potentials to stimulate the emission of its own spike. This
prediction agrees with experiments done on the hippocampus (McNaughton et al.,
1981), where about 400 convergent inputs are needed to discharge a granule cell.
5  CONCLUSIONS
To conclude, we will just summarize our major points:
• Spike noise generated by the Poisson firing of neurons breaks the symmetry
between on/off states, in favor of the "off" state.
• State-dependent spike noise also destroys any sense of a global energy function, let alone a thermodynamic 'temperature'. This makes us suspicious of
attempts to apply standard techniques of statistical mechanics.
• By explicitly modeling the interaction of neurons via spikes, we have direct
access to experiments which can guide, and be guided by, our theory. Specifically, our theory predicts that for a given connection strength between neurons, larger networks of neurons will function as memories at naturally small
spike-input to spike-output ratios.
• More realistic forms of post-synaptic response to the receipt of action potentials alter the network dynamics. By adjusting the relative rise and fall time of the post-synaptic potential, the network speeds the relaxation to the local
stable state. This implies that more efficacious memories, or "learning", can
result without altering the strength of the synaptic weights.
Finally, we comment on the dynamics of networks in the N → ∞ limit. We might imagine that some of the complexities we find in the two-neuron case would go away,
in particular the probability currents. We have been able to prove that this does not
happen in any rigorous sense for realistic forms of spike noise, although in practice
the currents may become small. The function of the network as a memory (for
example) would then depend on a clean separation of time scales between relaxation
into a single basin of attraction and noise-driven transitions to neighboring basins.
Arranging for this separation of time scales requires some constraints on synaptic
connectivity and firing rates which might be testable in experiments on real circuits.
References
D. J. Aidley (1980), Physiology of Excitable Cells, 2nd Edition, Cambridge University Press, Cambridge.
D. J. Amit, H. Gutfreund and H. Sompolinsky (1985a), Phys. Rev. A, 2, 1007-1018.
D. J. Amit, H. Gutfreund and H. Sompolinsky (1985b), Phys. Rev. Lett., 55,
1530-1533.
E. E. Fetz and B. Gustafsson (1983), J. Physiol., 341, 387.
J. J. Hopfield (1982), Proc. Nat. Acad. Sci. USA, 79,2554-2558.
J. J. Hopfield (1984), Proc. Nat. Acad. Sci. USA, 81,3088-3092.
D. Kleinfeld, F. Raccuia-Behling, and H. J. Chiel (1990), Biophysical Journal, in
press.
W. S. McCulloch and W. Pitts (1943), Bull. of Math. Biophys., 5, 115-133.
B. L. McNaughton, C. A. Barnes and P. Anderson (1981), J. Neurophysiol. 46,
952-966.
W. M. Siebert (1965), Kybernetik, 2, 206.
W. M. Siebert (1968), in Recognizing Patterns, p. 104, P. A. Kohlers and M. Eden, Eds., MIT Press, Cambridge.
R. L. Stratonovich (1963, 1967), Topics in the Theory of Random Noise, Vol. I and
II, Gordon & Breach, New York.
M. C. Teich, L. Martin and B. I. Cantor (1978), J. Opt. Soc. Am., 68, 386.
M. C. Teich and B.E.A. Saleh (1981), J. Opt. Soc. Am.,71, 771.
Q-Clustering
Mukund Narasimhan*
Nebojsa Jojic†
Jeff Bilmes*
*Dept of Electrical Engineering, University of Washington, Seattle WA
†Microsoft Research, Microsoft Corporation, Redmond WA
{mukundn,bilmes}@ee.washington.edu and [email protected]
Abstract
We show that Queyranne's algorithm for minimizing symmetric submodular functions can be used for clustering with a variety of different objective functions. Two specific criteria that we consider in this paper are the single linkage and the minimum description length criteria. The first criterion tries to maximize the minimum distance between elements of different clusters, and is inherently "discriminative". It is known that optimal clusterings into k clusters, for any given k, can be computed in polynomial time for this criterion. The second criterion seeks to minimize the
description length of the clusters given a probabilistic generative model.
We show that the optimal partitioning into 2 clusters, and approximate
partitioning (guaranteed to be within a factor of 2 of the optimal) for
more clusters can be computed. To the best of our knowledge, this is
the first time that a tractable algorithm for finding the optimal clustering
with respect to the MDL criterion for 2 clusters has been given. Besides
the optimality result for the MDL criterion, the chief contribution of this
paper is to show that the same algorithm can be used to optimize a broad
class of criteria, and hence can be used for many application-specific criteria for which efficient algorithms are not known.
1 Introduction
The clustering of data is a problem found in many pattern recognition tasks, often in the
guises of unsupervised learning, vector quantization, dimensionality reduction, etc. Formally, the clustering problem can be described as follows. Given a finite set S, and a criterion function Jk defined on all partitions of S into k parts, find a partition of S into k parts
{S1 , S2 , . . . , Sk } so that Jk ({S1 , S2 , . . . , Sk }) is maximized. The number of k-clusters
for a size n > k data set is roughly k^n/k! [5], so exhaustive search is not an efficient solution. The problem, in fact, is NP-complete for most desirable measures. Broadly speaking
there are two classes of criteria for clustering. There are distance based criteria, for which
a distance measure is specified between each pair of elements, and the criterion somehow
combines either intercluster or intracluster distances into an objective function. The other
class of criteria are model based, and for these, a probabilistic (generative) model is specified. There is no universally accepted criterion for clustering. The appropriate criterion is
typically application dependent, and therefore, we do not claim that the two criteria considered in this paper are inherently better or more generally applicable than other criteria.
However, we can show that for the single-linkage criterion, we can compute the optimal
clustering into k parts (for any k), and for the MDL criterion, we can compute the optimal
clustering into 2 parts using Queyranne's algorithm. More generally, any criterion from a broad class of criteria can be solved by the same algorithm, and this class of criteria is closed under linear combinations. In addition to the theoretical elegance of a single algorithm solving a number of very different criteria, this means that we can optimize (for example) for the sum of the single-linkage and MDL criteria (or positively scaled versions thereof). The two criteria we consider are quite different. The first, "discriminative",
criterion we consider is the single-linkage criterion. In this case, we are given distances
d(s_1, s_2) between all elements s_1, s_2 ∈ S, and we try to find clusters that maximize the
minimum distance between elements of different clusters (i.e., maximize the separation of
the clusters). This criterion has several advantages. Since we are only comparing distances,
the distance measure can be chosen from any ordered set (addition/squaring/multiplication
of distances need not be defined as is required for K-means, spectral clustering etc.). Further, this criterion only depends on the rank ordering of the distances, and so is completely
insensitive to any monotone transformation of the distances. This gives a lot of flexibility
in constructing a distance measure appropriate for an application. For example, it is a very
natural candidate when the distance measure is derived from user studies (since users are
more likely to be able to provide rankings than exact distances). On the other hand, this
criterion is sensitive to outliers and may not be appropriate when there are a large number
of outliers in the data set. The kernel based criterion considered in [3] is similar in spirit
to this one. However, their algorithm only provides approximate solutions, and the extension to more than 2 clusters is not given. However, since they optimize the distance of the
clusters to a hyperplane, it is more appropriate if the clusters are to be classified using a
SVM.
The second criterion we consider is "generative" in nature and is based on the Minimum
Description Length principle. In this case we are given a (generative) probability model
for the elements, and we attempt to find clusters so that describing or encoding the clusters
(separately) can be done using as few bits as possible. This is also a very natural criterion: grouping together data items that can be highly compressed translates to grouping elements
that share common characteristics. This criterion has also been widely used in the past,
though the algorithms given do not guarantee optimal solutions (even for 2 clusters).
Since these criteria seem quite different in nature, it is surprising that the same algorithm
can be used to find the optimal partitions into two clusters in both cases. The key principle
here is the notion of submodularity (and its variants) [1, 2]. We will show that the problem
of finding the optimal clusterings minimizing the description length is equivalent to the
problem of minimizing a symmetric submodular function, and the problem of maximizing
the cluster separation is equivalent to minimizing a symmetric function which, while not
submodular, is closely related, and can be minimized by the same algorithm.
2 Background and Notation
A clustering of a finite set S is a partition {S1 , S2 , . . . , Sk } of S. We will call the individual elements of the partition the clusters of the partition. If there are k clusters in
the partition, then we say that the partition is a k-clustering. Let Ck (S) be the set of all
k-clusterings for 1 ≤ k ≤ |S|. For the first criterion, we assume we are given a function d : S × S → R that represents the "distance" between objects. Intuitively, we expect that d(s, t) is large when the objects are dissimilar. We will assume that d(·, ·) is symmetric, but make no further assumptions. In particular we do not assume that d(·, ·) is a metric (later on in this paper, we will not even assume that d(s, t) is a (real) number, but instead will allow the range of d to be an ordered set). The distance between sets T and R is
often defined to be the smallest distance between elements from these different clusters:
D(R, T) = min_{r∈R, t∈T} d(r, t). The single-linkage criterion tries to maximize this distance, and hence an optimal 2-clustering is in arg max_{{S_1,S_2}∈C_2(S)} D(S_1, S_2). We let O_k(S) be the set of all optimal k-clusterings for 1 ≤ k ≤ |S| with respect to D(·, ·). It is
known that an algorithm based on the Minimum Spanning Tree can be used to find optimal
clusterings for the single-linkage criterion[8].
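That MST route is easy to sketch: build the minimum spanning tree with Kruskal's algorithm and delete its k − 1 heaviest edges; the surviving components maximize the single-linkage separation. The function and variable names below are ours, not from [8].

```python
def single_linkage_clusters(S, d, k):
    """Optimal k-clustering for the single-linkage criterion via the MST."""
    n = len(S)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    pairs = sorted((d(S[i], S[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    mst = []
    for w, i, j in pairs:                    # Kruskal's algorithm
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))
    keep = sorted(mst)[:-(k - 1)] if k > 1 else mst
    parent = list(range(n))
    for w, i, j in keep:                     # drop the k-1 heaviest MST edges
        parent[find(i)] = find(j)
    return [find(i) for i in range(n)]       # cluster label per element
```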
For the second criterion, we assume S is a collection of random variables, and for any subset T = {s1 , s2 , . . . , sm } of S, we let H(T ) be the entropy of the set of random variables
{s1 , s2 , . . . , sm }. Now, the (expected) total cost of encoding or describing the set T is
H(T ). So a partition {S1 , S2 } of S that minimizes the description length (DL) is in
arg min_{{S_1,S_2}∈C_2(S)} DL(S_1, S_2) = arg min_{{S_1,S_2}∈C_2(S)} H(S_1) + H(S_2)
We will denote by 2^S the set of all subsets of S. A set function f : 2^S → R assigns a (real) number to every subset of S. We say that f is submodular if f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for every A, B ⊆ S. f is symmetric if f(A) = f(S \ A). In
[1], Queyranne gives a polynomial time algorithm that finds a set A ∈ 2^S \ {S, ∅} that minimizes any symmetric submodular set function (specified in the form of an oracle). That is, Queyranne's algorithm finds a non-trivial partition {S_1, S \ S_1} of S so that f(S_1) (= f(S \ S_1)) minimizes f over all non-trivial subsets of S. The problem of finding non-trivial minimizers of a symmetric submodular function can be thought of as a generalization of the graph-cut problem. For a symmetric set function f, we can think of f(S_1) as f(S_1, S \ S_1), and if we can extend f to be defined on all pairs of disjoint subsets of S, then Rizzi showed in [2] that Queyranne's algorithm works even when f is not submodular, as long as f is monotone and consistent, where f is monotone if for R, T, T' ⊆ S with T' ⊆ T and R ∩ T = ∅ we have f(R, T') ≤ f(R, T), and f is consistent if f(A, W ∪ B) ≤ f(B, A ∪ W) whenever A, B, W ⊆ S are disjoint sets satisfying f(A, W) ≤ f(B, W).
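A compact sketch of Queyranne's procedure follows: repeatedly build a maximum-adjacency-style ordering of supernodes, record the cut defined by the last node of the ordering (a pendant pair), and merge the last two. The oracle interface and the names are our own choices.

```python
def queyranne(f, V):
    """Minimize a symmetric submodular f over nontrivial subsets of V.
    f takes a frozenset of elements of V; returns (best_set, best_value)."""
    nodes = [frozenset([v]) for v in V]             # current supernodes
    best_set, best_val = None, float("inf")
    while len(nodes) > 1:
        order, remaining = [nodes[0]], nodes[1:]
        while remaining:
            W = frozenset().union(*order)
            # the analogue of "most tightly connected to W" for general f
            u = min(remaining, key=lambda u: f(W | u) - f(u))
            order.append(u)
            remaining.remove(u)
        t, u = order[-2], order[-1]                 # pendant pair
        if f(u) < best_val:                         # f(u) is a candidate cut
            best_set, best_val = u, f(u)
        nodes = [n for n in nodes if n not in (t, u)] + [t | u]
    return best_set, best_val
```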
The rest of this paper is organized as follows. In Section 3, we show that Queyranne's
algorithm can be used to find the optimal k-clustering (for any k) in polynomial time for
the single-linkage criterion. In Section 4, we give an algorithm for finding the optimal
clustering into 2 parts that minimizes the description length. In Section 5, we present some
experimental results.
3 Single-Linkage: Maximizing the separation between clusters
In this section, we show that Queyranne's algorithm can be used for finding k-clusters (for
any given k) that maximize the separation between elements of different clusters. We do
this in two steps. First in Subsection 3.1, we show that Queyranne's algorithm can partition
the set S into two parts to maximize the distance between these parts in polynomial time.
Then in Subsection 3.2, we show how this subroutine can be used to find optimal k clusters,
also in polynomial time.
3.1 Optimal 2-clusterings
In this section, we will show that the function −D(·, ·) is monotone and consistent. Therefore, by Rizzi's result, it follows that we can find a 2-clustering {S_1, S_2} = {S_1, S \ S_1} that minimizes −D(S_1, S_2), and hence maximizes D(S_1, S_2).
Lemma 1. If R ⊆ T, then D(U, T) ≤ D(U, R) (and hence −D(U, R) ≤ −D(U, T)). This would imply that −D is monotone.
To see this, observe that D(U, T) = min_{u∈U, t∈T} d(u, t) = min( min_{u∈U, r∈R} d(u, r), min_{u∈U, t∈T\R} d(u, t) ) ≤ D(U, R).
Lemma 2. Suppose that A, B, W are disjoint subsets of S and D(A, W) ≥ D(B, W). Then D(A, W ∪ B) ≥ D(B, A ∪ W).
To see this, first observe that D(A, B ∪ W) = min(D(A, B), D(A, W)) because D(A, W ∪ B) = min_{a∈A, x∈W∪B} d(a, x) = min( min_{a∈A, w∈W} d(a, w), min_{a∈A, b∈B} d(a, b) ). It follows that D(A, B ∪ W) = min(D(A, B), D(A, W)) ≥ min(D(A, B), D(B, W)) = min(D(B, A), D(B, W)) = D(B, A ∪ W). Therefore, if −D(A, W) ≤ −D(B, W), then −D(A, W ∪ B) ≤ −D(B, A ∪ W). Hence −D(·, ·) is consistent.
Therefore, −D(·, ·) is symmetric, monotone and consistent. Hence it can be minimized using Queyranne's algorithm [2]. Therefore, we have a procedure to compute optimal 2-clusterings. We now extend this to compute optimal k-clusterings.
3.2 Optimal k-clusterings
We start off by extending our objective function for k-clusterings in the obvious way. The
function D(R, T ) can be thought of as defining the separation or margin between the
clusters R and T . We can generalize this notion to more than two clusters as follows. Let
seperation({S_1, S_2, . . . , S_k}) = min_{i≠j} D(S_i, S_j) = min_{S_i≠S_j} min_{s_i∈S_i, s_j∈S_j} d(s_i, s_j)
Note that seperation({R, T}) = D(R, T) for a 2-clustering. The function seperation : ∪_{k=1}^{|S|} C_k(S) → R takes a single clustering as its argument. However, D(·, ·) takes two disjoint subsets of S as its arguments, the union of which need not be S in general. The margin is the distance between the closest elements of different clusters, and hence we will be interested in finding k-clusters that maximize the margin. Therefore, we seek an element in O_k(S) = arg max_{{S_1,S_2,...,S_k}∈C_k(S)} seperation({S_1, S_2, . . . , S_k}). Let v_k(S) be the margin of an element in O_k(S). Therefore, v_k(S) is the best possible margin of any k-clustering of S. An obvious approach to generating optimal k-clusterings given a method
of generating optimal 2-clusterings is the following. Start off with an optimal 2-clustering
{S1 , S2 }. Then apply the procedure to find 2-clusterings of S1 and S2 , and stop when you
have enough clusters. There are two potential problems with this approach. First, it is not
clear that an optimal k-clustering can be a refinement of an optimal 2-clustering. That is,
we need to be sure that there is an optimal k-clustering in which S1 is the union of some
of the clusters, and S2 is the union of the remaining. Second, we need to figure out how
many of the clusters S1 is the union of and how many S2 is the union of. In this section, we
will show that for any k ≥ 3, there is always an optimal k-clustering that is a refinement
of any given optimal 2-clustering. A simple dynamic programming algorithm takes care of
the second potential problem.
We begin by establishing some relationships between the separation of clusterings of different sizes. To compare the separation of clusterings with different numbers of clusters, we can try to merge two of the clusters from the clustering with more clusters. Say that S = {S_1, S_2, . . . , S_k} ∈ C_k(S) is any k-clustering of S, and S' is a (k − 1)-clustering of S obtained by merging two of the clusters (say S_1 and S_2). Then S' = {S_1 ∪ S_2, S_3, . . . , S_k} ∈ C_{k−1}(S).
Lemma 3. Suppose that S = {S_1, S_2, . . . , S_k} ∈ C_k(S) and S' = {S_1 ∪ S_2, S_3, . . . , S_k} ∈ C_{k−1}(S). Then seperation(S) ≤ seperation(S'). In other words,
refining a partition can only reduce the margin.
Therefore, refining a clustering (i.e., splitting a cluster) can only reduce the separation. An
immediate corollary is the following.
Corollary 4. If T_l ∈ C_l(S) is a refinement of T_k ∈ C_k(S) (for k < l) then seperation(T_l) ≤ seperation(T_k). It follows that v_k(S) ≥ v_l(S) if 1 ≤ k < l ≤ n.
Proof. It suffices to prove the result for k = l − 1. The first assertion follows immediately from Lemma 3. Let S ∈ O_l(S) be an optimal l-clustering. Merge any two clusters to get S' ∈ C_k(S). By Lemma 3, v_k(S) ≥ seperation(S') ≥ seperation(S) = v_l(S).
Next, we consider the question of constructing larger partitions (i.e., partitions with more
clusters) from smaller partitions. Given two clusterings S = {S_1, S_2, . . . , S_k} ∈ C_k(S) and T = {T_1, T_2, . . . , T_l} ∈ C_l(S) of S, we can create a new clustering U = {U_1, U_2, . . . , U_m} ∈ C_m(S) to be their common refinement. That is, the clusters of U consist of those elements that are in the same clusters of both S and T. Formally,
U = {S_i ∩ T_j : 1 ≤ i ≤ k, 1 ≤ j ≤ l}.
Lemma 5. Let S = {S_1, S_2, . . . , S_k} ∈ C_k(S) and T = {T_1, T_2, . . . , T_l} ∈ C_l(S) be any two partitions. Let U = {U_1, U_2, . . . , U_m} ∈ C_m(S) be their common refinement. Then seperation(U) = min(seperation(S), seperation(T)).
Proof. It is clear that seperation(U) ≤ min(seperation(S), seperation(T)). To show
equality, note that if a, b are in different clusters of U, then a, b must have been in different clusters of either S or T .
This result can be thought of as expressing a relationship between seperation and the lattice
of partitions of S, which will be important to our later robustness extension.
Lemma 6. Suppose that S = {S_1, S_2} ∈ O_2(S) is an optimal 2-clustering. Then there is
always an optimal k-clustering that is a refinement of S.
Proof. Suppose that this is not the case. If T = {T_1, T_2, . . . , T_k} ∈ O_k(S) is an optimal k-clustering, let r be the number of clusters of T that "do not respect" the partition {S_1, S_2}. That is, r is the number of clusters of T that intersect both S_1 and S_2: r = |{1 ≤ i ≤ k : T_i ∩ S_1 ≠ ∅ and T_i ∩ S_2 ≠ ∅}|. Pick T ∈ O_k(S) to have the smallest r. If r = 0, then T is a refinement of S and there is nothing to show. Otherwise, r ≥ 1. Assume WLOG that T_1^(1) = T_1 ∩ S_1 ≠ ∅ and T_1^(2) = T_1 ∩ S_2 ≠ ∅. Then T' = {T_1^(1), T_1^(2), T_2, T_3, . . . , T_k} ∈ C_{k+1}(S) is a refinement of T and satisfies seperation(T') = seperation(T). This follows from Lemma 3 along with the fact that (1) D(T_i, T_j) ≥ seperation(T) for any 2 ≤ i < j ≤ k, (2) D(T_1^(i), T_j) ≥ seperation(T) for any i ∈ {1, 2} and 2 ≤ j ≤ k, (3) D(T_1^(1), T_1^(2)) ≥ seperation({S_1, S_2}) = v_2(S) ≥ v_k(S) = seperation(T).
Now, pick two clusters of T' that are either both contained in the same cluster of S or both "do not respect" S. Clearly this can always be done. Merge these clusters together to get an element T'' ∈ C_k(S). By Lemma 3, merging clusters cannot decrease the margin. Therefore, seperation(T'') = seperation(T') = seperation(T). However, T'' has fewer clusters that do not respect S than T has, and hence we have a contradiction.
This lemma implies that Queyranne's algorithm, along with a simple dynamic programming algorithm, can be used to find the best k-clustering with time complexity O(k |S|^3). Observe that in fact this problem can be solved in time O(|S|^2) ([8]). Even though using Queyranne's algorithm is not the fastest algorithm for this problem, the fact that it optimizes this criterion implies that it can be used to optimize conic combinations of submodular criteria and the single-linkage criterion.
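Given Lemma 6, the dynamic program is short: recursively split with the optimal 2-clustering routine and try every allocation of cluster counts to the two sides. In the sketch below, split2 stands for any routine returning an optimal 2-clustering of its argument; it, seperation, and the names are assumptions of the sketch, not library functions.

```python
from functools import lru_cache

def best_k_clustering(S, split2, seperation, k):
    """Refine optimal 2-clusterings (justified by Lemma 6) into k clusters.
    Assumes k <= |S|; clusters are returned as a tuple of frozensets."""
    @lru_cache(maxsize=None)
    def solve(T, m):
        if m == 1 or len(T) == 1:
            return (T,)
        A, B = split2(T)                       # optimal 2-clustering of T
        best, best_sep = None, float("-inf")
        # give j clusters to A and m-j to B, for every feasible j
        for j in range(max(1, m - len(B)), min(len(A), m - 1) + 1):
            cand = solve(frozenset(A), j) + solve(frozenset(B), m - j)
            s = seperation(cand)
            if s > best_sep:
                best, best_sep = cand, s
        return best
    return solve(frozenset(S), k)
```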
3.3 Generating robust clusterings
One possible issue with the metric we defined is that it is very sensitive to outliers and
noise. To see this, note that if we have two very well separated clusters, then adding a
few points "between" the clusters could dramatically decrease the separation. To increase
the robustness of the algorithm, we can try to maximize the n smallest distances instead
of maximizing just the smallest distance between clusters. If we give the nth smallest
distance more importance than the smallest distance, this increases the noise tolerance by
ignoring the effects of a few outliers. We will take n ∈ N to be some fixed positive integer specified by the user. This will represent the desired degree of noise tolerance (larger gives more noise tolerance). Let R^n be the set of decreasing n-tuples of elements in R ∪ {∞}. Given disjoint sets R, T ⊆ S, let D(R, T) be the element of R^n obtained as follows. Let L(R, T) = ⟨d_1, d_2, . . . , d_{|R|·|T|}⟩ be an ordered list of distances between elements of R and T arranged in decreasing order. So for example, if R = {1, 2} and T = {3, 4}, with d(r, t) = r · t, then L(R, T) = ⟨8, 6, 4, 3⟩. We define D(R, T) as follows. If |R| · |T| ≥ n, then D(R, T) is the last (and thus least) n elements of L(R, T). Otherwise, if |R| · |T| < n, then the first n − |R| · |T| elements of D(R, T) are ∞, while the remaining elements are the elements of L(R, T). So for example, if n = 2, then D(R, T) in the above example would be ⟨4, 3⟩, if n = 3 then D(R, T) = ⟨6, 4, 3⟩ and if n = 6, then D(R, T) = ⟨∞, ∞, 8, 6, 4, 3⟩.
We define an operation ⊕ on R^n as follows. To get ⟨l_1, l_2, . . . , l_n⟩ ⊕ ⟨r_1, r_2, . . . , r_n⟩, order the elements of ⟨l_1, l_2, . . . , l_n, r_1, r_2, . . . , r_n⟩ in decreasing order, and let ⟨s_1, s_2, . . . , s_n⟩ be the last n elements. For example, ⟨∞, 3, 2⟩ ⊕ ⟨∞, 6, 5⟩ = ⟨5, 3, 2⟩ and ⟨4, 3, 1⟩ ⊕ ⟨5, 4, 3⟩ = ⟨3, 3, 1⟩. So, the ⊕ operation picks off the n smallest elements. It is clear that this operation is commutative (symmetric), associative, and that ⟨∞, ∞, . . . , ∞⟩ acts as an identity. Therefore, R^n forms a commutative semigroup. In fact, we can describe D(R, T) as follows. For any pair of distinct elements r, t ∈ S, let d*(r, t) = ⟨∞, ∞, . . . , d(r, t)⟩. Then D(R, T) = ⊕_{r∈R, t∈T} d*(r, t). Notice the similarity to D(R, T) = min_{r∈R, t∈T} d(r, t). In fact, if we take n = 1, then the ⊕ operation reduces to the minimum operation and we get back our original definitions. We can order R^n lexicographically. Therefore, R^n becomes an ordered semigroup. It is entirely straightforward to check that if R ⊆ T, then D(U, T) ≤ D(U, R), and that if A, B, W are disjoint sets with D(A, W) ≥ D(B, W), then D(A, W ∪ B) ≥ D(B, A ∪ W). It is also straightforward to extend Rizzi's proof to see that Queyranne's algorithm (with the obvious modifications) will generate a 2-clustering that minimizes this metric. It can also be verified that the results of Section 3.2 can be extended to this framework (also with the obvious modifications).
In our experiments, we observed that selecting the parameter n is quite tricky. Now,
Queyranne's algorithm actually produces a (Gomory-Hu) tree [1] whose edges represent
the cost of separating elements. In practice we noticed that restricting our search to only
edges whose deletion results in clusters of at least certain sizes produces very good results.
Other heuristics such as running the algorithm a number of times to eliminate outliers are
also reasonable approaches. Modifying the algorithm to yield good results while retaining
the theoretical guarantees is an open question.
4 MDL Clustering
We assume that S is a collection of random variables for which we have a (generative)
probability model. Since we have the joint probabilities of all subsets of the random variables, the entropy of any collection of the variables is well defined. The expected coding
(or description) length of any collection T of random variables using an optimal coding
scheme (or a random coding scheme) is known to be H(T). The partition {S_1, S_2} of S that minimizes the coding length is therefore arg min_{{S_1,S_2}∈C_2(S)} H(S_1) + H(S_2). Now,

arg min_{{S_1,S_2}∈C_2(S)} H(S_1) + H(S_2) = arg min_{{S_1,S_2}∈C_2(S)} [H(S_1) + H(S_2) − H(S)] = arg min_{{S_1,S_2}∈C_2(S)} I(S_1; S_2)

where I(S_1; S_2) is the mutual information between S_1 and S_2, because S_1 ∪ S_2 = S for
all {S_1, S_2} ∈ C_2(S). Therefore, the problem of partitioning S into two parts to minimize the description length is equivalent to partitioning S into two parts to minimize the mutual information between the parts. It is shown in [9] that the function f : 2^S → R
defined by f (T ) = I(T ; S \ T ) is symmetric and submodular. Clearly the minima of this
function correspond to partitions that minimize the mutual information between the parts.
Therefore, the problem of partitioning in order to minimize the mutual information between
the parts can be reduced to a symmetric submodular minimization problem, which can be
solved using Queyranne's algorithm in time O(|S|^3) assuming oracle queries to a mutual
information oracle. While implementing such a mutual information oracle is not trivial, for
many realistic applications (including one we consider in this paper), the cost of computing
a mutual information query is bounded above by the size of the data set, and so the entire
algorithm is polynomial in the size of the data set. Symmetric submodular functions generalize notions like graph-cuts, and indeed, Queyranne's algorithm generalizes an algorithm
for computing graph-cuts. Since graph-cut based techniques are extensively used in many
engineering applications, it might be possible to develop criteria that are more appropriate
for these specific applications, while still producing optimal partitions of size 2.
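For jointly Gaussian variables the oracle is closed form, I(T; S\T) = ½(log|Σ_T| + log|Σ_{S\T}| − log|Σ|), so a sketch such as the following (our own construction, assuming the covariance Σ is known) can be passed directly to a Queyranne-style minimizer like the one sketched in Section 2.

```python
import numpy as np

def gaussian_mi_oracle(Sigma):
    """f(T) = I(T; S\\T) for jointly Gaussian variables with covariance Sigma."""
    n = Sigma.shape[0]
    full = np.linalg.slogdet(Sigma)[1]
    def logdet(idx):
        idx = list(idx)
        return np.linalg.slogdet(Sigma[np.ix_(idx, idx)])[1]
    def f(T):
        T = sorted(T)
        comp = [i for i in range(n) if i not in set(T)]
        if not T or not comp:
            return 0.0
        return 0.5 * (logdet(T) + logdet(comp) - full)
    return f

# usage: f = gaussian_mi_oracle(Sigma); queyranne(f, range(Sigma.shape[0]))
```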
It should be noted that, in general, we cannot use the dynamic programming algorithm
to produce optimal clusterings with k > 2 clusters for the MDL criterion (or for general
symmetric submodular functions). The key reason is that we cannot prove the equivalent
of Lemma 6 for the MDL criterion. However, such an algorithm seems reasonable, and
it does produce reasonable results. Another approach (which is computationally cheaper)
is to compute k clusters by deleting k − 1 edges of the Gomory-Hu tree produced by Queyranne's algorithm. It can be shown [9] that this will yield a factor 2 approximation to the optimal k-clustering. More generally, if we have an arbitrary increasing submodular function (such as entropy) f : 2^S → R, and we seek a clustering {S_1, S_2, . . . , S_k} to minimize the sum Σ_{i=1}^{k} f(S_i), then we have an exact algorithm for 2-clusterings and a factor 2 approximation guarantee. Therefore, this generalizes approximation guarantees for graph k-cuts, because for any graph G = (V, E), the function f : 2^V → R, where f(A) is the number of edges adjacent to the vertex set A, is a submodular function. Then finding a clustering to minimize Σ_{i=1}^{k} f(S_i) is equivalent to finding a partition of the vertex set of size k to minimize the number of edges disconnected (i.e., to the graph k-cut
problem). Another criterion which we can define similarly can be applied to clustering
genomic sequences. Intuitively, two genomes are more closely related if they share more
common subsequences. Therefore, a natural clustering criterion for sequences is to partition the sequences into clusters so that the sequences from different clusters share as few
subsequences as possible. This problem too can be solved using this generic framework.
5 Results
Table 1 compares Q-Clustering with various other algorithms. The left part of the table
shows the error rates (in percentages) of the (robust) single-linkage criterion and some
other techniques on the same data set as is reported in [3]. The data sets are images (of
digits and faces), and the distance function we used was the Euclidean distance between
the vector of the pixels in the images. The right part of the table compares the Q-Clustering
using MDL criterion with other state of the art algorithms for haplotype tagging of SNPs
(single nucleotide polymorphisms) in the ACE gene on the data set reported in [4]. In this
problem, the goal is to identify a set of SNPs that can accurately predict at least 90% of
the SNPs in the ACE gene. Typically the SNPs are highly correlated, and so it is necessary to
cluster SNPs to identify the correlated SNPs. Note it is very important to identify as few
SNPs as possible because the number of clinical trials required grows exponentially with
the number of SNPs. As can be seen Q-Clustering does very well on this data set.
6 Conclusions
The maximum-separation (single-linkage) metric is a very natural "discriminative" criterion, and it has several advantages, including insensitivity to any monotone transformation
of the distances. However, it is quite sensitive to outliers. The robust version does help
Robust Max-Separation (Single-Linkage)

  Method           | Error rate on Digits | Error rate on Faces
  -----------------|----------------------|--------------------
  Q-Clustering     | 1.4                  | 0
  Max-Margin*      | 3                    | 0
  Spectral Clust.* | 6                    | 16.7
  K-means*         | 7                    | 24.4

MDL

  Method           | #SNPs required
  -----------------|---------------
  Q-Clustering     | 3
  EigenSNP†        | 5
  Sliding Window†  | 15
  htStep (up)†     | 7
  htStep (down)†   | 7

Table 1: Comparing (robust) max-separation and MDL Q-Clustering with other techniques. Results marked by * and † are from [3] and [4] respectively.
a little, but it does require some additional knowledge (about the approximate number of
outliers) and considerable tuning. It is possible that we could develop additional heuristics
to automatically determine the parameters of the robust version. The MDL criterion is also
a very natural one, and the results on haplotype tagging are quite promising. The MDL criterion can be seen as a generalization of graph cuts, and so it seems like Q-clustering can
also be applied to optimize other criteria arising in problems like image segmentation, especially when there is a generative model. Another natural criterion for clustering strings is
to partition the strings/sequences to minimize the number of common subsequences. This
could have interesting applications in genomics. The key novelty of this paper is the guarantees of optimality produced by the algorithm, and the general framework into which a number of natural criteria fall.
7 Acknowledgments
The authors acknowledge the assistance of Linli Xu in obtaining the data to test the algorithm and for providing the code used in [3]. Gilles Blanchard pointed out that the MST
algorithm finds the optimal solution for the single-linkage criterion. The first and third
authors were supported by NSF grant IIS-0093430 and an Intel Corporation Grant.
References
[1] M. Queyranne. "Minimizing symmetric submodular functions", Math. Programming, 82, pages 3-12, 1998.
[2] R. Rizzi, "On minimizing symmetric set functions", Combinatorica 20(3), pages 445-450, 2000.
[3] L. Xu, J. Neufeld, B. Larson and D. Schuurmans. "Maximum Margin Clustering", in Advances in Neural Information Processing Systems 17, pages 1537-1544, 2005.
[4] Z. Lin and R. B. Altman. "Finding Haplotype Tagging SNPs by Use of Principal Components Analysis", Am. J. Hum. Genet. 75, pages 850-861, 2004.
[5] A. K. Jain and R. C. Dubes, "Algorithms for Clustering Data." Englewood Cliffs, N.J.: Prentice Hall, 1988.
[6] P. Brucker, "On the complexity of clustering problems," in R. Henn, B. Korte, and W. Oletti (eds.), Optimization and Operations Research, Lecture Notes in Economics and Mathematical Systems, Springer, Berlin 157.
[7] P. Kontkanen, P. Myllymäki, W. Buntine, J. Rissanen and H. Tirri. "An MDL framework for data clustering", HIIT Technical Report, 2004.
[8] M. Delattre and P. Hansen. "Bicriterion Cluster Analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 2, No. 4, 1980.
[9] M. Narasimhan, N. Jojic and J. Bilmes. "Q-Clustering", Technical Report, Dept. of Electrical Engg., University of Washington, UWEETR-2006-0001, 2005.
A Bayesian Framework for
Tilt Perception and Confidence
Odelia Schwartz
HHMI and Salk Institute
La Jolla, CA 92014
[email protected]
Terrence J. Sejnowski
HHMI and Salk Institute
La Jolla, CA 92014
[email protected]
Peter Dayan
Gatsby, UCL
17 Queen Square, London
[email protected]
Abstract
The misjudgement of tilt in images lies at the heart of entertaining visual illusions and rigorous perceptual psychophysics. A wealth of findings has attracted many mechanistic models, but few clear computational
principles. We adopt a Bayesian approach to perceptual tilt estimation,
showing how a smoothness prior offers a powerful way of addressing
much confusing data. In particular, we faithfully model recent results
showing that confidence in estimation can be systematically affected by
the same aspects of images that affect bias. Confidence is central to
Bayesian modeling approaches, and is applicable in many other perceptual domains.
Perceptual anomalies and illusions, such as the misjudgements of motion and tilt evident
in so many psychophysical experiments, have intrigued researchers for decades.1-3 A Bayesian view4-8 has been particularly influential in models of motion processing, treating
such anomalies as the normative product of prior information (often statistically codifying Gestalt laws) with likelihood information from the actual scenes presented. Here, we
expand the range of statistically normative accounts to tilt estimation, for which there are
classes of results (on estimation confidence) that are so far not available for motion.
The tilt illusion arises when the perceived tilt of a center target is misjudged (ie bias) in
the presence of flankers. Another phenomenon, called Crowding, refers to a loss in the
confidence (ie sensitivity) of perceived target tilt in the presence of flankers. Attempts
have been made to formalize these phenomena quantitatively. Crowding has been modeled
as compulsory feature pooling (ie averaging of orientations), ignoring spatial positions.9, 10
The tilt illusion has been explained by lateral interactions11, 12 in populations of orientation-tuned units; and by calibration.13
However, most models of this form cannot explain a number of crucial aspects of the data.
First, the geometry of the positional arrangement of the stimuli affects attraction versus
repulsion in bias, as emphasized by Kapadia et al14 (figure 1A), and others.15, 16 Second,
Solomon et al. recently measured bias and sensitivity simultaneously.11 The rich and
surprising range of sensitivities, far from flat as a function of flanker angles (figure 1B),
are outside the reach of standard models. Moreover, current explanations do not offer a
computational account of tilt perception as the outcome of a normative inference process.
Here, we demonstrate that a Bayesian framework for orientation estimation, with a prior
favoring smoothness, can naturally explain a range of seemingly puzzling tilt data. We
explicitly consider both the geometry of the stimuli, and the issue of confidence in the esti-
[Figure 1: (A) bias (deg), showing attraction and repulsion; (B) sensitivity (1/deg); both plotted against flanker tilt (deg).]
Figure 1: Tilt biases and sensitivities in visual perception. (A) Kapadia et al demonstrated the
importance of geometry on tilt bias, with bar stimuli in the fovea (and similar results in the
periphery). When 5 degrees clockwise flankers are arranged colinearly, the center target appears
attracted in the direction of the flankers; when flankers are lateral, the target appears repulsed.
Data are an average of 5 subjects.14 (B) Solomon et al measured both biases and sensitivities
for gratings in the visual periphery.11 On the top are example stimuli, with flankers tilted 22.5
degrees clockwise. This constitutes the classic tilt illusion, with a repulsive bias percept. In
addition, sensitivities vary as a function of flanker angles, in a systematic way (even in cases
when there are no biases at all). Sensitivities are given in units of the inverse of standard deviation
of the tilt estimate. More detailed data for both experiments are shown in the results section.
mation. Bayesian analyses have most frequently been applied to bias. Much less attention
has been paid to the equally important phenomenon of sensitivity. This aspect of our model
should be applicable to other perceptual domains.
In section 1 we formulate the Bayesian model. The prior is determined by the principle of
creating a smooth contour between the target and flankers. We describe how to extract the
bias and sensitivity. In section 2 we show experimental data of Kapadia et al and Solomon
et al, alongside the model simulations, and demonstrate that the model can account for both
geometry, and bias and sensitivity measurements in the data. Our results suggest a more
unified, rational, approach to understanding tilt perception.
1
Bayesian model
Under our Bayesian model, inference is controlled by the posterior distribution over the
tilt of the target element. This comes from the combination of a prior favoring smooth
configurations of the flankers and target, and the likelihood associated with the actual scene.
A complete distribution would consider all possible angles and relative spatial positions of
the bars, and marginalize the posterior over all but the tilt of the central element. For
simplicity, we make two benign approximations: conditionalizing over (ie clamping) the
angles of the flankers, and exploring only a small neighborhood of their positions. We now
describe the steps of inference.
Smoothness prior: Under these approximations, we consider a given actual configuration
(see fig 2A) of flankers f_1 = (θ_1, x_1), f_2 = (θ_2, x_2) and center target c = (θ_c, x_c), arranged from top to bottom. We have to generate a prior over θ_c and Δ_1 = x_1 − x_c and Δ_2 = x_2 − x_c
based on the principle of smoothness. As a less benign approximation, we do this in two
stages: articulating a principle that determines a single optimal configuration; and generating a prior as a mixture of a Gaussian about this optimum and a uniform distribution, with
the mixing proportion of the latter being determined by the smoothness of the optimum.
Smoothness has been extensively studied in the computer vision literature.17?20 One widely
[Figure 2: (A) example flanker/target configuration; (B) elastica rotation and shift; (C) maximally smooth target tilt (deg), and P[c, f1, f2] at that tilt, each plotted against flanker tilt (deg).]
Figure 2: Geometry and smoothness for flankers, f1 and f2 , and center target, c. (A) Example
actual configuration of flankers and target, aligned along the y axis from top to bottom. (B)
The elastica procedure can rotate the target angle (to θ_c) and shift the relative flanker and target positions on the x axis (to Δ_1 and Δ_2) in its search for the maximally smooth solution. Small
spatial shifts (up to 1/15 the size of R) of positions are allowed, but positional shift is overemphasized in the figure for visibility. (C) Top: center tilt that results in maximal smoothness, as
a function of flanker tilt. Boxed cartoons show examples for given flanker tilts, of the optimally
smooth configuration. Note attraction of target towards flankers for small flanker angles; here
flankers and target are positioned in a nearly colinear arrangement. Note also repulsion of target
away from flankers for intermediate flanker angles. Bottom: P [c, f1 , f2 ] for center tilt that yields
maximal smoothness. The y axis is normalized between 0 and 1.
One widely used principle, elastica, known even to Euler, has been applied to contour completion21
and other computer vision applications.17 The basic idea is to find the curve with minimum
energy (ie, square of curvature). Sharon et al19 showed that the elastica function can be
well approximated by a number of simpler forms. We adopt a version that Leung and
Malik18 adopted from Sharon et al.19 We assume that the probability for completing a
smooth curve can be factorized into two terms:

P[c, f1, f2] = G(c, f1) G(c, f2)    (1)
with the term G(c, f1) (and similarly, G(c, f2)) written as:

G(c, f1) = exp(−Dβ/σβ − R/σR),  where  Dβ = β1² + βc² − β1βc    (2)

and β1 (and similarly, βc) is the angle between the orientation at f1, and the line joining
f1 and c. The distance between the centers of f1 and c is given by R. The two constants,
σβ and σR, control the relative contribution to smoothness of the angle versus the spatial
distance. Here, we set σβ = 1, and σR = 1.5. Figure 2B illustrates an example geometry,
in which θc, Δ1, and Δ2 have been shifted from the actual scene (of figure 2A).
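As a concrete illustration, the sketch below scores candidate center tilts and small positional shifts with the smoothness terms of equations (1)-(2). The geometric conventions (angles measured from vertical, flankers one distance unit above and below the target) and the grid resolutions are our assumptions for illustration, not specifications from the text.

```python
import numpy as np

SIGMA_BETA, SIGMA_R = 1.0, 1.5   # constants from the text above

def wrap(a):
    # treat orientations as undirected lines: map differences into (-pi/2, pi/2]
    return (a + np.pi / 2) % np.pi - np.pi / 2

def G(theta_f, pos_f, theta_c, pos_c):
    # smoothness term of equation (2) between one flanker and the center target
    dx, dy = pos_c[0] - pos_f[0], pos_c[1] - pos_f[1]
    joining = np.arctan2(dx, dy)        # angle of the joining line, from vertical
    R = np.hypot(dx, dy)                # distance between the two centers
    beta_f, beta_c = wrap(theta_f - joining), wrap(theta_c - joining)
    D_beta = beta_f ** 2 + beta_c ** 2 - beta_f * beta_c
    return np.exp(-D_beta / SIGMA_BETA - R / SIGMA_R)

def max_smooth(theta_flanker, theta_grid, shift_grid):
    # brute-force version of the search in figure 2B: rotate the target tilt
    # and allow small positional shifts of each flanker
    best_tilt, best_p = 0.0, -1.0
    for theta_c in theta_grid:
        for d1 in shift_grid:
            for d2 in shift_grid:
                p = (G(theta_flanker, (d1, 1.0), theta_c, (0.0, 0.0)) *
                     G(theta_flanker, (d2, -1.0), theta_c, (0.0, 0.0)))
                if p > best_p:
                    best_tilt, best_p = theta_c, p
    return best_tilt, best_p

tilt, p = max_smooth(np.deg2rad(22.5),
                     np.deg2rad(np.linspace(-45.0, 45.0, 181)),
                     np.linspace(-1.0 / 15.0, 1.0 / 15.0, 5))
print(np.rad2deg(tilt), p)   # maximally smooth center tilt and its smoothness
```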
We now estimate the smoothest solution for given configurations. Figure 2C shows, for
given flanker tilts, the center tilt that yields maximal smoothness, and the corresponding
probability of smoothness. For near vertical flankers, the spatial lability leads to very weak
attraction and high probability of smoothness. As the flanker angle deviates farther from
vertical, there is a large repulsion, but also lower probability of smoothness. These observations are key to our model: the maximally smooth center tilt will influence attractive and
repulsive interactions of tilt estimation; the probability of smoothness will influence the
relative weighting of the prior versus the likelihood.
From the smoothness principle, we construct a two-dimensional prior (figure 3A). One
dimension represents tilt; the other, the overall positional shift between target
[Figure 3 about here; panels: (A) Prior, (B) Likelihood, (C) Posterior, (D) Marginalized Posterior, (E) Psychometric function. Axes: angle vs. position in (A)-(C), angle in (D), and target angle (deg) vs. probability clockwise in (E).]
Figure 3: Bayes model for example flankers and target. (A) Prior 2D distribution for flankers
set at 22.5 degrees (note repulsive preference for -5.5 degrees). (B) Likelihood 2D distribution
for a target tilt of 3 degrees; (C) Posterior 2D distribution. All 2D distributions are drawn on
the same grayscale range, and the presence of a larger baseline in the prior causes it to appear
more dimmed. (D) Marginalized posterior, resulting in 1D distribution over tilt. Dashed line
represents the mean, with slight preference for negative angle. (E) For this target tilt, we calculate
probability clockwise, and obtain one point on the psychometric curve.
and flankers (called "position"). The prior is a 2D Gaussian distribution, sat upon a constant
baseline.22 The Gaussian is centered at the estimated smoothest target angle and relative
position, and the baseline is determined by the probability of smoothness. The baseline, and
its dependence on the flanker orientation, is a key difference from Weiss et al's Gaussian
prior for smooth, slow motion. It can be seen as a mechanism to allow segmentation (see
Posterior description below). The standard deviation of the Gaussian is a free parameter.
Likelihood: The likelihood over tilt and position (figure 3B) is determined by a 2D Gaussian distribution with an added baseline.22 The Gaussian is centered at the actual target
tilt, and at a position taken as zero, since this is the actual position, to which the prior is
compared. The standard deviation and baseline constant are free parameters.
Posterior and marginalization: The posterior comes from multiplying likelihood and
prior (figure 3C) and then marginalizing over position to obtain a 1D distribution over tilt.
Figure 3D shows an example in which this distribution is bimodal. Other likelihoods, with
closer agreement between target and smooth prior, give unimodal distributions. Note that
the bimodality is a direct consequence of having an added baseline to the prior and likelihood (if these were Gaussian without a baseline, the posterior would always be Gaussian).
The viewer is effectively assessing whether the target is associated with the same object as
the flankers, and this is reflected in the baseline, and consequently, in the bimodality, and
confidence estimate. We define α as the mean angle of the 1D posterior distribution (eg,
value of dashed line on the x axis), and γ as the height of the probability distribution at
that mean angle (eg, height of dashed line). The term γ is an indication of confidence in
the angle estimate, where for larger values we are more certain of the estimate.
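A minimal sketch of this inference pipeline follows: build the 2D prior and likelihood as Gaussians with an added baseline on a tilt-position grid, multiply, and marginalize over position. All numerical values here (grid ranges, widths, baselines, the prior mode) are placeholders, not the fitted parameters.

```python
import numpy as np

tilts = np.linspace(-20.0, 20.0, 201)        # deg
positions = np.linspace(-0.2, 0.2, 41)
T, X = np.meshgrid(tilts, positions, indexing="ij")

def gauss2d_with_baseline(t0, x0, st, sx, baseline):
    # 2D Gaussian over (tilt, position) sat upon a constant baseline
    g = np.exp(-0.5 * (((T - t0) / st) ** 2 + ((X - x0) / sx) ** 2))
    d = g + baseline
    return d / d.sum()                        # normalize on the grid

prior = gauss2d_with_baseline(t0=-5.5, x0=0.02, st=4.0, sx=0.05,
                              baseline=0.5)   # centered on the smoothest config
likelihood = gauss2d_with_baseline(t0=3.0, x0=0.0, st=3.0, sx=0.05,
                                   baseline=0.1)  # centered on the actual tilt

posterior = prior * likelihood
posterior /= posterior.sum()

marginal = posterior.sum(axis=1)              # 1D distribution over tilt
alpha = np.sum(tilts * marginal)              # mean angle of the posterior
gamma = np.interp(alpha, tilts, marginal)     # height at the mean angle
```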
Decision of probability clockwise: The probability of a clockwise tilt is estimated from
the marginalized posterior:

P = 1 / (1 + exp( −αγk / (−log(γ + ε)) ))    (3)

where α and γ are defined as above, k is a free parameter and ε a small constant. Free
parameters are set to a single constant value for all flanker and center configurations. Weiss
et al use a similar compressive nonlinearity, but without the term γ. We also tried a decision
function that integrates the posterior, but the resulting curves were far from the sigmoidal
nature of the data.
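In code, the decision stage of equation (3) (under the reading of the formula given above) is a one-liner; the parameter values in the usage line are illustrative only.

```python
import numpy as np

def prob_clockwise(alpha, gamma, k, eps=1e-3):
    # equation (3); -log(gamma + eps) > 0 when gamma + eps < 1, which holds
    # for the discretized marginal used in the previous sketch
    return 1.0 / (1.0 + np.exp(-alpha * gamma * k / (-np.log(gamma + eps))))

print(prob_clockwise(alpha=1.2, gamma=0.03, k=9.0))
```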
Bias and sensitivity: For one target tilt, we generate a single probability and therefore a
single point on the psychometric function relating tilt to the probability of choosing clockwise.
[Figure 4 about here; panels (A)-(D). X axis: target tilt (deg); y axis: frequency responding clockwise, for data and model.]
Figure 4: Kapadia et al data,14 versus Bayesian model. Solid lines are fits to a cumulative
Gaussian distribution. (A) Flankers are tilted 5 degrees clockwise (black curve) or anti-clockwise
(gray) of vertical, and positioned spatially in a colinear arrangement. The center bar appears
tilted in the direction of the flankers (attraction), as can be seen by the attractive shift of the
psychometric curve. The boxed stimuli cartoon illustrates a vertical target amidst the flankers.
(B) Model for colinear bars also produces attraction. (C) Data and (D) model for lateral flankers
result in repulsion. All data are collected in the fovea for bars.
We generate the full psychometric curve from all target tilts and fit to it a cumulative
Gaussian distribution N(μ, σ) (figure 3E). The mean μ of the fit corresponds to the bias,
and 1/σ to the sensitivity, or confidence in the bias. The fit to a cumulative Gaussian and
extraction of these parameters exactly mimic psychophysical procedures.11
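This extraction of bias and sensitivity can be mimicked directly, here with SciPy and synthetic psychometric points standing in for the model outputs of equation (3):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(t, mu, sigma):
    return norm.cdf(t, loc=mu, scale=sigma)

target_tilts = np.linspace(-10.0, 10.0, 9)          # deg
rng = np.random.default_rng(0)
# synthetic stand-in for the model's probability-clockwise values
p_cw = np.clip(cum_gauss(target_tilts, 1.5, 2.5)
               + rng.normal(0.0, 0.02, target_tilts.size), 0.0, 1.0)

(mu, sigma), _ = curve_fit(cum_gauss, target_tilts, p_cw, p0=(0.0, 2.0))
bias, sensitivity = mu, 1.0 / sigma
print(bias, sensitivity)
```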
2 Results: data versus model
We first consider the geometry of the center and flanker configurations, modeling the full
psychometric curve for colinear and parallel flanks (recall that figure 1A showed summary
biases). Figure 4A;B demonstrates attraction in the data and model; that is, the psychometric curve is shifted towards the flanker, because of the nature of smooth completions for
colinear flankers. Figure 4C;D shows repulsion in the data and model. In this case, the
flankers are arranged laterally instead of colinearly. The smoothest solution in the model
arises by shifting the target estimate away from the flankers. This shift is rather minor,
because the configuration has a low probability of smoothness (similar to figure 2C), and
thus the prior exerts only a weak effect.
The above results show examples of changes in the psychometric curve, but do not address
both bias and, particularly, sensitivity, across a whole range of flanker configurations. Figure 5 depicts biases and sensitivity from Solomon et al, versus the Bayes model. The data
are shown for a representative subject, but the qualitative behavior is consistent across all
subjects tested. In figure 5A, bias is shown, for the condition that both flankers are tilted
at the same angle. The data exhibit small attraction at near vertical flanker angles (this
arrangement is close to colinear); large repulsion at intermediate flanker angles of 22.5 and
45 degrees from vertical; and minimal repulsion at large angles from vertical. This behavior is also exhibited in the Bayes model (Figure 5B). For intermediate flanker angles, the
smoothest solution in the model is repulsive, and the effect of the prior is strong enough to
induce a significant repulsion. For large angles, the prior exerts almost no effect.
Interestingly, sensitivity is far from flat in both data and model. In the data (Figure 5C),
there is most loss in sensitivity at intermediate flanker angles of 22.5 and 45 degrees (ie,
the subject is less certain); and sensitivity is higher for near vertical or near horizontal
flankers. The model shows the same qualitative behavior (Figure 5D). In the model, there
are two factors driving sensitivity: one is the probability of completing a smooth curvature
for a given flanker configuration, as in Figure 2B; this determines the strength of the prior.
The other factor is certainty in a particular center estimation; this is determined by γ, derived from the posterior distribution, and incorporated into the decision stage of the model
[Figure 5 about here; panels (A)-(H). X axis: flanker tilt (deg); y axes: bias (deg) and sensitivity (1/deg), for data and model.]
Figure 5: Solomon et al data11 (subject FF), versus Bayesian model. (A) Data and (B) model
biases with same-tilted flankers; (C) Data and (D) model sensitivities with same-tilted flankers;
(E;G) data and (F;H) model as above, but for opposite-tilted flankers (note that opposite-tilted
data was collected for fewer flanker angles). Each point in the figure is derived by fitting a cumulative Gaussian distribution N(μ, σ) to the corresponding psychometric curve, and setting bias
equal to μ and sensitivity to 1/σ. In all experiments, flanker and target gratings are presented in
the visual periphery. Both data and model stimuli are averages of two configurations, on the left
hand side (9 O'clock position) and right hand side (3 O'clock position). The configurations are
similar to Figure 1 (B), but slightly shifted according to an iso-eccentric circle, so that all stimuli
are similarly visible in the periphery.
(equation 3). For flankers that are far from vertical, the prior has minimal effect because
one cannot find a smooth solution (eg, the likelihood dominates), and thus sensitivity is
higher. The low sensitivity at intermediate angles arises because the prior has considerable
effect, and there is conflict between the prior (tilt, position) and likelihood (tilt, position).
This leads to uncertainty in the target angle estimation. For flankers near vertical, the prior
exerts a strong effect, but there is less conflict between the likelihood and prior estimates
(tilt, position) for a vertical target. This leads to more confidence in the posterior estimate,
and therefore, higher sensitivity. The only aspect that our model does not reproduce is the
(more subtle) sensitivity difference between 0 and +/- 5 degree flankers.
Figure 5E-H depict data and model for opposite-tilted flankers. The bias is now close to zero
in the data (Figure 5E) and model (Figure 5F), as would be expected (since the maximally
smooth angle is now always roughly vertical). Perhaps more surprisingly, the sensitivities
continue to be non-flat in the data (Figure 5G) and model (Figure 5H). This behavior
arises in the model due to the strength of the prior, and positional uncertainty. As before, there
is most loss in sensitivity at intermediate angles.
Note that to fit Kapadia et al, simulations used a constant parameter of k = 9 in equation
3, whereas for the Solomon et al. simulations, k = 2.5. This indicates that, in our model,
there was higher confidence in the foveal experiments than in the peripheral ones.
3 Discussion
We applied a Bayesian framework to the widely studied tilt illusion, and demonstrated the
model on examples from two different data sets involving foveal and peripheral estimation.
Our results support the appealing hypothesis that perceptual misjudgements are not a consequence of poor system design, but rather can be described as optimal inference.4–8 Our
model accounts correctly for both attraction and repulsion, determined by the smoothness
prior and the geometry of the scene.
We emphasized the issue of estimation confidence. The dataset showing how confidence
is affected by the same issues that affect bias,11 was exactly appropriate for a Bayesian
formulation; other models in the literature typically do not incorporate confidence in a
thoroughly probabilistic manner. In fact, our model fits the confidence (and bias) data more
proficiently than an account based on lateral interactions among a population of orientation-tuned cells.11 Other Bayesian work, by Stocker et al,6 utilized the full slope of the psychometric curve in fitting a prior and likelihood to motion data, but did not examine the issue
of confidence. Estimation confidence plays a central role in Bayesian formulations as a
whole. Understanding how priors affect confidence should have direct bearing on many
other Bayesian calculations such as multimodal integration.23
Our model is obviously over-simplified in a number of ways. First, we described it in terms
of tilts and spatial positions; a more complete version should work in the pixel/filtering domain.18, 19 We have also only considered two flanking elements; the model is extendible
to a full-field surround, whereby smoothness operates along a range of geometric directions, and some directions are more (smoothly) dominant than others. Second, the prior
is constructed by summarizing the maximal smoothness information; a more probabilistically correct version should capture the full probability of smoothness in its prior. Third,
our model does not incorporate a formal noise representation; however, sensitivities could
be influenced both by stimulus-driven noise and confidence. Fourth, our model does not
address attraction in the so-called indirect tilt illusion, thought to be mediated by a different
mechanism. Finally, we have yet to account for neurophysiological data within this framework, and incorporate constraints at the neural implementation level. However, versions
of our computations are oft suggested for intra-areal and feedback cortical circuits; and
smoothness principles form a key part of the association field connection scheme in Li's24
dynamical model of contour integration in V1.
Our model is connected to a wealth of literature in computer vision and perception. Notably, occlusion and contour completion might be seen as the extreme example in which
there is no likelihood information at all for the center target; a host of papers have shown
that under these circumstances, smoothness principles such as elastica and variants explain
many aspects of perception. The model is also associated with many studies on contour integration motivated by Gestalt principles;25, 26 and exploration of natural scene statistics and
Gestalt,27, 28 including the relation to contour grouping within a Bayesian framework.29, 30
Indeed, our model could be modified to include a prior from natural scenes.
There are various directions for the experimental test and refinement of our model. Most
pressing is to determine bias and sensitivity for different center and flanker contrasts. As
in the case of motion, our model predicts that when there is more uncertainty in the center
element, prior information is more dominant. Another interesting test would be to design
a task such that the center element is actually part of a different figure and unrelated to the
flankers; our framework predicts that there would be minimal bias, because of segmentation. Our model should also be applied to other tilt-based illusions such as the Fraser spiral
and Zöllner. Finally, our model can be applied to other perceptual domains;31 and given
the apparent similarities between the tilt illusion and the tilt after-effect, we plan to extend
the model to adaptation, by considering smoothness in time as well as space.
Acknowledgements This work was funded by the HHMI (OS, TJS) and the Gatsby Charitable Foundation (PD). We are very grateful to Serge Belongie, Leanne Chukoskie, Philip
Meier and Joshua Solomon for helpful discussions.
References
[1] J J Gibson. Adaptation, after-effect, and contrast in the perception of tilted lines. Journal of Experimental Psychology, 20:553–569, 1937.
[2] C Blakemore, R H S Carpentar, and M A Georgeson. Lateral inhibition between orientation detectors in the human visual system. Nature, 228:37–39, 1970.
[3] J A Stuart and H M Burian. A study of separation difficulty: Its relationship to visual acuity in normal and amblyopic eyes. American Journal of Ophthalmology, 53:471–477, 1962.
[4] A Yuille and H H Bulthoff. Perception as bayesian inference. In Knill and Whitman, editors, Bayesian decision theory and psychophysics, pages 123–161. Cambridge University Press, 1996.
[5] Y Weiss, E P Simoncelli, and E H Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5:598–604, 2002.
[6] A Stocker and E P Simoncelli. Constraining a bayesian model of human visual speed perception. Adv in Neural Info Processing Systems, 17, 2004.
[7] D Kersten, P Mamassian, and A Yuille. Object perception as bayesian inference. Annual Review of Psychology, 55:271–304, 2004.
[8] K Kording and D Wolpert. Bayesian integration in sensorimotor learning. Nature, 427:244–247, 2004.
[9] L Parkes, J Lund, A Angelucci, J Solomon, and M Morgan. Compulsory averaging of crowded orientation signals in human vision. Nature Neuroscience, 4:739–744, 2001.
[10] D G Pelli, M Palomares, and N J Majaj. Crowding is unlike ordinary masking: Distinguishing feature integration from detection. Journal of Vision, 4:1136–1169, 2002.
[11] J Solomon, F M Felisberti, and M Morgan. Crowding and the tilt illusion: Toward a unified account. Journal of Vision, 4:500–508, 2004.
[12] J A Bednar and R Miikkulainen. Tilt aftereffects in a self-organizing model of the primary visual cortex. Neural Computation, 12:1721–1740, 2000.
[13] C W Clifford, P Wenderoth, and B Spehar. A functional angle on some after-effects in cortical vision. Proc Biol Sci, 1454:1705–1710, 2000.
[14] M K Kapadia, G Westheimer, and C D Gilbert. Spatial distribution of contextual interactions in primary visual cortex and in visual perception. J Neurophysiology, 4:2048–2062, 2000.
[15] C C Chen and C W Tyler. Lateral modulation of contrast discrimination: Flanker orientation effects. Journal of Vision, 2:520–530, 2002.
[16] I Mareschal, M P Sceniak, and R M Shapley. Contextual influences on orientation discrimination: binding local and global cues. Vision Research, 41:1915–1930, 2001.
[17] D Mumford. Elastica and computer vision. In Chandrajit Bajaj, editor, Algebraic geometry and its applications. Springer Verlag, 1994.
[18] T K Leung and J Malik. Contour continuity in region based image segmentation. In Proc. ECCV, pages 544–559, 1998.
[19] E Sharon, A Brandt, and R Basri. Completion energies and scale. IEEE Pat. Anal. Mach. Intell., 22(10), 1997.
[20] S W Zucker, C David, A Dobbins, and L Iverson. The organization of curve detection: coarse tangent fields. Computer Graphics and Image Processing, 9(3):213–234, 1988.
[21] S Ullman. Filling in the gaps: the shape of subjective contours and a model for their generation. Biological Cybernetics, 25:1–6, 1976.
[22] G E Hinton and A D Brown. Spiking boltzmann machines. Adv in Neural Info Processing Systems, 12, 1998.
[23] R A Jacobs. What determines visual cue reliability? Trends in Cognitive Sciences, 6:345–350, 2002.
[24] Z Li. A saliency map in primary visual cortex. Trends in Cognitive Science, 6:9–16, 2002.
[25] D J Field, A Hayes, and R F Hess. Contour integration by the human visual system: evidence for a local "association field". Vision Research, 33:173–193, 1993.
[26] J Beck, A Rosenfeld, and R Ivry. Line segregation. Spatial Vision, 4:75–101, 1989.
[27] M Sigman, G A Cecchi, C D Gilbert, and M O Magnasco. On a common circle: Natural scenes and gestalt rules. PNAS, 98(4):1935–1940, 2001.
[28] S Mahumad, L R Williams, K K Thornber, and K Xu. Segmentation of multiple salient closed contours from real images. IEEE Pat. Anal. Mach. Intell., 25(4):433–444, 1997.
[29] W S Geisler, J S Perry, B J Super, and D P Gallogly. Edge co-occurence in natural images predicts contour grouping performance. Vision Research, 6:711–724, 2001.
[30] J H Elder and R M Goldberg. Ecological statistics of gestalt laws for the perceptual organization of contours. Journal of Vision, 4:324–353, 2002.
[31] S R Lehky and T J Sejnowski. Neural model of stereoacuity and depth interpolation based on a distributed representation of stereo disparity. Journal of Neuroscience, 10:2281–2299, 1990.
On the Convergence of Eigenspaces in Kernel
Principal Component Analysis
Laurent Zwald
Département de Mathématiques,
Université Paris-Sud,
Bât. 425, F-91405 Orsay, France
[email protected]
Gilles Blanchard
Fraunhofer First (IDA),
Kékuléstr. 7, D-12489 Berlin, Germany
[email protected]
Abstract
This paper presents a non-asymptotic statistical analysis of Kernel-PCA
with a focus different from the one proposed in previous work on this
topic. Here instead of considering the reconstruction error of KPCA we
are interested in approximation error bounds for the eigenspaces themselves. We prove an upper bound depending on the spacing between
eigenvalues but not on the dimensionality of the eigenspace. As a consequence, this allows us to infer stability results for these estimated spaces.
1 Introduction.
Principal Component Analysis (PCA for short in the sequel) is a widely used tool for data
dimensionality reduction. It consists in finding the most relevant lower-dimension projection of some data in the sense that the projection should keep as much of the variance of
the original data as possible. If the target dimensionality of the projected data is fixed in
advance, say D (an assumption that we will make throughout the present paper), the solution of this problem is obtained by considering the projection on the span SD of the first D
eigenvectors of the covariance matrix. Here by "first D eigenvectors" we mean eigenvectors associated to the D largest eigenvalues counted with multiplicity; hereafter with some
abuse the span of the first D eigenvectors will be called "D-eigenspace" for short when
there is no risk of confusion.
The introduction of the "Kernel trick" has allowed to extend this methodology to data
mapped in a kernel feature space, then called KPCA [8]. The interest of this extension
is that, while still linear in feature space, it gives rise to nonlinear interpretation in the original
space: vectors in the kernel feature space can be interpreted as nonlinear functions on the
original space.
For PCA as well as KPCA, the true covariance matrix (resp. covariance operator) is not
known and has to be estimated from the available data, an procedure which in the case of
Kernel spaces is linked to the so-called Nyström approximation [13]. The subspace given
as an output is then obtained as the D-eigenspace ŜD of the empirical covariance matrix or
operator. An interesting question from a statistical or learning theoretical point of view is
then how reliable this estimate is.
This question has already been studied [10, 2] from the point of view of the reconstruction
error of the estimated subspace. What this means is that (assuming the data is centered in
Kernel space for simplicity) the average reconstruction error (square norm of the distance to
the projection) of ŜD converges to the (optimal) reconstruction error of SD and that bounds
are known about the rate of convergence. However, this does not tell us much about the
convergence of SD to ŜD, since two very different subspaces can have a very similar
reconstruction error, in particular when some eigenvalues are very close to each other (the
gap between the eigenvalues will actually appear as a central point of the analysis to come).
In the present work, we set to study the behavior of these D-eigenspaces themselves: we
provide finite sample bounds describing the closeness of the D-eigenspaces of the empirical covariance operator to the true one. There are several broad motivations for this
analysis. First, the reconstruction error alone is a valid criterion only if one really plans to
perform dimensionality reduction of the data and stop there. However, PCA is often used
merely as a preprocessing step and the projected data is then submitted to further processing (which could be classification, regression or something else). In particular for KPCA,
the projection subspace in the kernel space can be interpreted as a subspace of functions on
the original space; one then expects these functions to be relevant for the data at hand and
for some further task (see e.g. [3]). In these cases, if we want to analyze the full procedure (from a learning theoretical sense), it is desirable to have a more precise information
on the selected subspace than just its reconstruction error. In particular, from a learning
complexity point of view, it is important to ensure that functions used for learning stay in
a set of limited complexity, which is ensured if the selected subspace is stable (which is a
consequence of its convergence).
The approach we use here is based on perturbation bounds and we essentially walk in the
steps pioneered by Koltchinskii and Giné [7] (see also [4]) using tools of operator perturbation theory [5]. Similar methods have been used to prove consistency of spectral clustering
[12, 11]. An important difference here is that we want to study directly the convergence of
the whole subspace spanned by the first D eigenvectors instead of the separate convergence
of the individual eigenvectors; in particular we are interested in how D acts as a complexity
parameter. The important point in our main result is that it does not: only the gap between
the D-th and the (D + 1)-th eigenvalue comes into account. This means that there in no
increase in complexity (as far as this bound is concerned: of course we cannot exclude that
better bounds can be obtained in the future) between estimating the D-th eigenvector alone
or the span of the first D eigenvectors.
Our contribution in the present work is thus
• to adapt the operator perturbation result of [7] to D-eigenspaces.
• to get non-asymptotic bounds on the approximation error of Kernel-PCA eigenspaces thanks to the previous tool.
In section 2 we briefly introduce the notation, explain the main ingredients used and obtain
a first bound based on controlling separately the first D eigenvectors, and depending on the
dimension D. In section 3 we explain why the first bound is actually suboptimal and derive
an improved bound as a consequence of an operator perturbation result that is more adapted
to our needs and deals directly with the D-eigenspace as a whole. Section 4 concludes and
discusses the obtained results. Mathematical proofs are found in the appendix.
2 First result.
Notation. The variable of interest X takes its values in some measurable space X, following
the distribution P. We consider KPCA and are therefore primarily interested in the mapping of X into a reproducing kernel Hilbert space H with kernel function k through the
feature mapping Φ(x) = k(x, ·). The objective of the kernel PCA procedure is to recover a
D-dimensional subspace SD of H such that the projection of Φ(X) on SD has maximum
averaged squared norm.
All operators considered in what follows are Hilbert-Schmidt and the norm considered for
these operators will be the Hilbert-Schmidt norm unless specified otherwise. Furthermore
we only consider symmetric nonnegative operators, so that they can be diagonalized and
have a discrete spectrum.
Let C denote the covariance operator of the variable Φ(X). To simplify notation we assume
that the nonzero eigenvalues λ1 > λ2 > ... of C are all simple (this is for convenience only;
in the conclusion we discuss what changes have to be made if this is not the case). Let
φ1, φ2, ... be the associated eigenvectors. It is well-known that the optimal D-dimensional
reconstruction space is SD = span{φ1, ..., φD}. The KPCA procedure approximates this
objective by considering the empirical covariance operator, denoted Cn, and the subspace
ŜD spanned by its first D eigenvectors. We denote PSD, PŜD the orthogonal projectors on
these spaces.
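In practice ŜD is computed from the Gram matrix, since Cn and K/n share their nonzero eigenvalues. A minimal sketch (uncentered, matching the simplifying assumption above, and assuming the top D eigenvalues are positive) is:

```python
import numpy as np

def kpca_eigenspace(X, kernel, D):
    n = X.shape[0]
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])   # Gram matrix
    evals, evecs = np.linalg.eigh(K / n)                       # ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]                 # largest first
    # the r-th unit-norm eigenfunction of C_n is f_r = sum_i a[i, r] k(x_i, .)
    a = evecs[:, :D] / np.sqrt(n * evals[:D])
    return evals[:D], a

rbf = lambda x, y: np.exp(-np.sum((x - y) ** 2))
X = np.random.default_rng(0).normal(size=(50, 3))
top_evals, coeffs = kpca_eigenspace(X, rbf, D=4)
```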
A first bound. Broadly speaking, the main steps required to obtain the type of result we
are interested in are
1. A non-asympotic bound on the (Hilbert-Schmidt) norm of the difference between
the empirical and the true covariance operators;
2. An operator perturbation result bounding the difference between spectral projectors of two operators by the norm of their difference.
The combination of these two steps leads to our goal. The first step consists in the following
Lemma coming from [9]:
Lemma 1 (Corollary 5 of [9]) Supposing that sup_{x∈X} k(x, x) ≤ M, with probability
greater than 1 − e^{−ξ},

||Cn − C|| ≤ (2M/√n) (1 + √(ξ/2)).
As for the second step, [7] provides the following perturbation bound (see also e.g. [12]):
Theorem 2 (Simplified Version of [7], Theorem 5.2) Let A be a symmetric positive
Hilbert-Schmidt operator of the Hilbert space H with simple positive eigenvalues λ1 >
λ2 > ... For an integer r such that λr > 0, let δ̃r = min(δr, δr−1), where δr = (λr − λr+1)/2.
Let B ∈ HS(H) be another symmetric operator such that ||B|| < δ̃r/2 and (A + B) is
still a positive operator with simple nonzero eigenvalues.

Let Pr(A) (resp. Pr(A + B)) denote the orthogonal projector onto the subspace spanned
by the r-th eigenvector of A (resp. (A + B)). Then, these projectors satisfy:

||Pr(A) − Pr(A + B)|| ≤ 2||B|| / δ̃r.
Remark about the Approximation Error of the Eigenvectors: let us recall that a control over the Hilbert-Schmidt norm of the projections onto eigenspaces implies a control on
the approximation errors of the eigenvectors themselves. Indeed, let φr, ψr denote the (normalized) r-th eigenvectors of the operators above, with signs chosen so that ⟨φr, ψr⟩ > 0.
Then

||P_φr − P_ψr||² = 2(1 − ⟨φr, ψr⟩²) ≥ 2(1 − ⟨φr, ψr⟩) = ||φr − ψr||².
Now, the orthogonal projector on the direct sum of the first D eigenspaces is the sum
Σ_{r=1}^D Pr. Using the triangle inequality, and combining Lemma 1 and Theorem 2, we
conclude that with probability at least 1 − e^{−ξ} the following holds:

||PSD − PŜD|| ≤ (4M/√n) (1 + √(ξ/2)) Σ_{r=1}^D δ̃r^{−1},

provided that n ≥ 16M² (1 + √(ξ/2))² (sup_{1≤r≤D} δ̃r^{−2}). The disadvantage of this bound
is that we are penalized on the one hand by the (inverse) gaps between the eigenvalues, and
on the other by the dimension D (because we have to sum the inverse gaps from 1 to D).
In the next section we sharpen the operator perturbation bound to get an improved result
where only the gap δD comes into account.
3 Improved Result.
We first prove the following variant on the operator perturbation property which better corresponds to our needs by taking directly into account the projection on the first D eigenvectors at once. The proof uses the same kind of techniques as in [7].
Theorem 3 Let A be a symmetric positive Hilbert-Schmidt operator of the Hilbert space
H with simple nonzero eigenvalues λ1 > λ2 > ... Let D > 0 be an integer such that
λD > 0, and put δD = (λD − λD+1)/2. Let B ∈ HS(H) be another symmetric operator such that
||B|| < δD/2 and (A + B) is still a positive operator. Let P^D(A) (resp. P^D(A + B))
denote the orthogonal projector onto the subspace spanned by the first D eigenvectors of A
(resp. (A + B)). Then these satisfy:

||P^D(A) − P^D(A + B)|| ≤ ||B|| / δD.    (1)
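A quick numerical sanity check of the bound (1) can be run on finite-dimensional operators, i.e. symmetric matrices with the Frobenius norm playing the role of the Hilbert-Schmidt norm; the dimensions and scaling constants below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, D = 30, 5

Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
lam = np.sort(rng.uniform(0.5, 2.0, d))[::-1]     # simple positive eigenvalues
A = (Q * lam) @ Q.T                               # A = Q diag(lam) Q^T

delta_D = 0.5 * (lam[D - 1] - lam[D])
B = rng.normal(size=(d, d))
B = (B + B.T) / 2.0
B *= 0.4 * delta_D / np.linalg.norm(B)            # enforce ||B|| < delta_D / 2

def proj_top(M, D):
    # orthogonal projector onto the span of the first D eigenvectors of M
    w, V = np.linalg.eigh(M)
    VD = V[:, np.argsort(w)[::-1][:D]]
    return VD @ VD.T

lhs = np.linalg.norm(proj_top(A, D) - proj_top(A + B, D))   # Frobenius norm
rhs = np.linalg.norm(B) / delta_D
print(lhs, rhs, lhs <= rhs)
```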
This then gives rise to our main result on KPCA:
Theorem 4 Assume that sup_{x∈X} k(x, x) ≤ M. Let SD, ŜD be the subspaces spanned
by the first D eigenvectors of C, resp. Cn, defined earlier. Denoting by λ1 > λ2 > ... the
eigenvalues of C, if D > 0 is such that λD > 0, put δD = (λD − λD+1)/2 and

BD = (2M/δD) (1 + √(ξ/2)).

Then provided that n ≥ BD², the following bound holds with probability at least 1 − e^{−ξ}:

||PSD − PŜD|| ≤ BD / √n.    (2)

This entails in particular

ŜD ⊂ { g + h : g ∈ SD, h ∈ SD⊥, ||h||_Hk ≤ 2 BD n^{−1/2} ||g||_Hk }.    (3)
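For illustration, the radius BD/√n is straightforward to evaluate once the spectrum of C is known (or estimated); the toy polynomially decaying spectrum below is an assumption for the example, not taken from the paper.

```python
import numpy as np

def theorem4_radius(lam, D, M, n, xi):
    """lam: eigenvalues of C in decreasing order (assumed known here)."""
    delta_D = 0.5 * (lam[D - 1] - lam[D])
    B_D = (2.0 * M / delta_D) * (1.0 + np.sqrt(xi / 2.0))
    if n < B_D ** 2:
        raise ValueError("n too small for the bound to apply")
    return B_D / np.sqrt(n)

lam = 1.0 / np.arange(1, 20) ** 2
print(theorem4_radius(lam, D=3, M=1.0, n=10 ** 6, xi=np.log(20)))  # 95% level
```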
The important point here is that the approximation error now only depends on D through
the (inverse) gap between the D-th and (D + 1)-th eigenvalues. Note that using the results
of section 2, we would have obtained exactly the same bound for estimating the D-th
eigenvector only, or even a worse bound since δ̃D = min(δD, δD−1) appears in this case.
Thus, at least from the point of view of this technique (which could still yield suboptimal
bounds), there is no increase of complexity between estimating the D-th eigenvector alone
and estimating the span of the first D eigenvectors.

Note that the inclusion (3) can be interpreted geometrically by saying that for any vector in
ŜD, the tangent of the angle between this vector and its projection on SD is upper bounded
by BD/√n, which we can interpret as a stability property.
Comment about the Centered Case. In the actual (K)PCA procedure, the data is actually first empirically recentered, so that one has to consider the centered covariance operator
C̄ and its empirical counterpart C̄n. A result similar to Theorem 4 also holds in this case
(up to some additional constant factors). Indeed, a result similar to Lemma 1 holds for the
recentered operators [2]. Combined again with Theorem 3, this allows us to come to similar
conclusions for the "true" centered KPCA.
4 Conclusion and Discussion
In this paper, finite sample size confidence bounds of the eigenspaces of Kernel-PCA (the
D-eigenspaces of the empirical covariance operator) are provided using tools of operator
perturbation theory. This provides a first step towards an in-depth complexity analysis of
algorithms using KPCA as pre-processing, and towards taking into account the randomness
of the obtained models (e.g. [3]). We proved a bound in which the complexity factor for
estimating the eigenspace SD by its empirical counterpart depends only on the inverse gap
between the D-th and (D + 1)-th eigenvalues. In addition to the previously cited works,
we take into account the centering of the data and obtain comparable rates.
In this work we assumed for simplicity of notation the eigenvalues to be simple. In the case
the covariance operator C has nonzero eigenvalues with multiplicities m1 , m2 , . . . possibly
larger than one, the analysis remains the same except for one point: we have to assume that
the dimension D of the subspaces considered is of the form m1 + · · · + mr for a certain
r. This could seem restrictive in comparison with the results obtained for estimating the
sum of the first D eigenvalues themselves [2] (which is linked to the reconstruction error
in KPCA) where no such restriction appears. However, it should be clear that we need
this restriction when considering D-eigenspaces themselves since the target space has to
be unequivocally defined, otherwise convergence cannot occur. Thus, it can happen in
this special case that the reconstruction error converges while the projection space itself
does not. Finally, a common point of the two analyses (over the spectrum and over the
eigenspaces) lies in the fact that the bounds involve an inverse gap in the eigenvalues of the
true covariance operator.
Finally, how tight are these bounds and do they at least carry some correct qualitative information about the behavior of the eigenspaces? Asymptotic results (central limit Theorems)
in [6, 4] always provide the correct goal to shoot for since they actually give the limit distributions of these quantities. They imply that there is still important ground to cover before
bridging the gap between asymptotic and non-asymptotic. This of course opens directions
for future work.
Acknowledgements: This work was supported in part by the PASCAL Network of Excellence (EU # 506778).
A Appendix: proofs.
Proof of Lemma 1. This lemma is proved in [9]. We give a short proof for the sake of
completeness. ||Cn − C|| = ||(1/n) Σ_{i=1}^n C_Xi − E[C_X]||, with ||C_X|| = ||Φ(X) ⊗ Φ(X)*|| =
k(X, X) ≤ M. We can apply the bounded difference inequality to the variable ||Cn − C||,
so that with probability greater than 1 − e^{−ξ},

||Cn − C|| ≤ E[||Cn − C||] + 2M √(ξ/(2n)).

Moreover, by Jensen's inequality, E[||Cn − C||] ≤ (E[||(1/n) Σ_{i=1}^n C_Xi − E[C_X]||²])^{1/2}, and
simple calculations lead to E||(1/n) Σ_{i=1}^n C_Xi − E[C_X]||² = (1/n) E||C_X − E[C_X]||² ≤
4M²/n. This concludes the proof of Lemma 1.
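The 1/√n behavior of ||Cn − C|| is easy to observe numerically. The sketch below uses a linear kernel on bounded data in R^d, where C is a d × d matrix and the Hilbert-Schmidt norm is the Frobenius norm; the data distribution is an arbitrary choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
C = np.eye(d) / 3.0          # E[X X^T] for iid coordinates uniform on [-1, 1]

for n in [100, 1000, 10000]:
    X = rng.uniform(-1.0, 1.0, size=(n, d))   # k(x, x) = ||x||^2 <= d =: M
    Cn = X.T @ X / n                          # empirical covariance operator
    print(n, np.linalg.norm(Cn - C))          # Frobenius = Hilbert-Schmidt norm
```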
Proof of Theorem 3. The variation of this proof with respect to Theorem 5.2 in [7] is (a)
to work directly in a (infinite-dimensional) Hilbert space, requiring extra caution for some
details and (b) obtaining an improved bound by considering D-eigenspaces at once.

The key property of Hilbert-Schmidt operators allowing to work directly in an infinite dimensional setting is that HS(H) is both a right and left ideal of Lc(H, H), the Banach
space of all continuous linear operators of H endowed with the operator norm ||·||_op. Indeed, for all T ∈ HS(H) and all S ∈ Lc(H, H), TS and ST belong to HS(H), with

||TS|| ≤ ||T|| ||S||_op and ||ST|| ≤ ||T|| ||S||_op.    (4)
The spectrum of a Hilbert-Schmidt operator T is denoted σ(T), and the sequence of its eigenvalues in non-increasing order is denoted λ(T) = (λ1(T) ≥ λ2(T) ≥ ...). In the following, P^D(T) denotes the orthogonal projector onto the D-eigenspace of T.

The Hoffmann-Wielandt inequality in the infinite dimensional setting [1] yields that:

||λ(A) − λ(A + B)||_ℓ² ≤ ||B|| ≤ δD/2,    (5)

implying in particular that

for all i > 0,  |λi(A) − λi(A + B)| ≤ δD/2.    (6)
Results found in [5], p. 39, yield the formula

P^D(A) − P^D(A + B) = −(1/2iπ) ∮_γ (R_A(z) − R_{A+B}(z)) dz ∈ Lc(H, H),    (7)

where R_A(z) = (A − z Id)^{−1} is the resolvent of A, provided that γ is a simple closed
curve in C enclosing exactly the first D eigenvalues of A and (A + B). Moreover, the same
reference (p. 60) states that for λ in the complement of σ(A),

||R_A(λ)||_op = dist(λ, σ(A))^{−1}.    (8)
The proof of the theorem now relies on the simple choice for the closed curve γ in (7),
drawn in the picture below and consisting of three straight lines and a semi-circle of radius
L. For all L > δD/2, γ intersects neither the eigenspectrum of A (by equation (6)) nor the
eigenspectrum of A + B. Moreover, the eigenvalues of A (resp. A + B) enclosed by γ are
exactly λ1(A), ..., λD(A) (resp. λ1(A + B), ..., λD(A + B)).

Moreover, for z ∈ γ, T(z) = R_A(z) − R_{A+B}(z) = R_{A+B}(z) B R_A(z) belongs to
HS(H) and depends continuously on z by (4). Consequently,

||P^D(A) − P^D(A + B)|| ≤ (1/2π) ∫_a^b ||(R_A − R_{A+B})(γ(t))|| |γ'(t)| dt.
Let S_N = Σ_{n=0}^N (−1)^n (R_A(z)B)^n R_A(z). Then R_{A+B}(z) = (Id + R_A(z)B)^{−1} R_A(z) and,
for z ∈ γ and L > δD,

||R_A(z)B||_op ≤ ||R_A(z)||_op ||B|| ≤ δD / (2 dist(z, σ(A))) ≤ 1/2,

[Figure: the contour γ, consisting of three straight lines and a semi-circle of radius L, crossing the real axis at λD + δD/2 and enclosing λ1, ..., λD at distance at least δD/2 from σ(A).]

which implies that S_N → R_{A+B}(z) in operator norm, uniformly for z ∈ γ. Using property (4), since B ∈
HS(H), S_N B R_A(z) → R_{A+B}(z) B R_A(z) = R_A(z) − R_{A+B}(z) in Hilbert-Schmidt norm. Finally,

R_A(z) − R_{A+B}(z) = Σ_{n≥1} (−1)^{n+1} (R_A(z)B)^n R_A(z),
where the series converges in HS(H), uniformly in z ∈ γ. Using again property (4) and
(8), this implies

||(R_A − R_{A+B})(γ(t))|| ≤ Σ_{n≥1} ||R_A(γ(t))||_op^{n+1} ||B||^n ≤ Σ_{n≥1} ||B||^n / dist^{n+1}(γ(t), σ(A)).

Finally, since for L > δD we have ||B|| ≤ δD/2 ≤ dist(γ(t), σ(A))/2,

||P^D(A) − P^D(A + B)|| ≤ (||B||/π) ∫_a^b |γ'(t)| / dist²(γ(t), σ(A)) dt.

Splitting the last integral into four parts according to the definition of the contour γ, we
obtain

(1/π) ∫_a^b |γ'(t)| / dist²(γ(t), σ(A)) dt ≤ 2 arctan(L/δD)/(π δD) + 1/L + 2 (λ1(A) − (λD(A) − δD))/L²,

and letting L go to infinity leads to the result.
Proof of Theorem 4. Lemma 1 and Theorem 3 yield inequality (2). Together with the assumption n ≥ BD², it implies ||PSD − PŜD|| ≤ 1/2. Let f ∈ ŜD: f = PSD(f) + PSD⊥(f).
Lemma 5 below, with F = SD and G = ŜD, and the fact that the operator norm is bounded
by the Hilbert-Schmidt norm, imply that

||PSD⊥(f)||²_Hk ≤ (4/3) ||PSD − PŜD||² ||PSD(f)||²_Hk.

Gathering the different inequalities, Theorem 4 is proved.
Lemma 5 Let F and G be two vector subspaces of H such that ||PF − PG||_op ≤ 1/2. Then
the following bound holds:

for all f ∈ G,  ||PF⊥(f)||²_H ≤ (4/3) ||PF − PG||²_op ||PF(f)||²_H.
Proof of Lemma 5. For f ∈ G, we have PG(f) = f, hence

||PF⊥(f)||² = ||f − PF(f)||² = ||(PG − PF)(f)||²
           ≤ ||PF − PG||²_op ||f||²
           = ||PF − PG||²_op (||PF(f)||² + ||PF⊥(f)||²);

gathering the terms containing ||PF⊥(f)||² on the left-hand side and using ||PF − PG||²_op ≤
1/4 leads to the conclusion.
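Lemma 5 is also easy to check numerically in finite dimension; the ambient dimension, subspace dimension and perturbation size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 20, 4
F = np.linalg.qr(rng.normal(size=(d, k)))[0]              # orthonormal basis of F
G = np.linalg.qr(F + 0.05 * rng.normal(size=(d, k)))[0]   # a nearby subspace
PF, PG = F @ F.T, G @ G.T

gap = np.linalg.norm(PF - PG, 2)          # operator (spectral) norm
f = G @ rng.normal(size=k)                # an arbitrary vector of G
lhs = np.sum(((np.eye(d) - PF) @ f) ** 2)             # ||P_{F^perp} f||^2
rhs = (4.0 / 3.0) * gap ** 2 * np.sum((PF @ f) ** 2)  # Lemma 5 bound
print(gap <= 0.5, lhs <= rhs)
```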
References
[1] R. Bhatia and L. Elsner. The Hoffman-Wielandt inequality in infinite dimensions. Proc. Indian Acad. Sci. (Math. Sci.) 104 (3), p. 483–494, 1994.
[2] G. Blanchard, O. Bousquet, and L. Zwald. Statistical Properties of Kernel Principal Component Analysis. Proceedings of the 17th Conference on Learning Theory (COLT 2004), p. 594–608. Springer, 2004.
[3] G. Blanchard, P. Massart, R. Vert, and L. Zwald. Kernel projection machine: a new tool for pattern recognition. Proceedings of the 18th Neural Information Processing Systems (NIPS 2004), p. 1649–1656. MIT Press, 2004.
[4] J. Dauxois, A. Pousse, and Y. Romain. Asymptotic theory for the Principal Component Analysis of a vector random function: some applications to statistical inference. Journal of Multivariate Analysis 12, 136–154, 1982.
[5] T. Kato. Perturbation Theory for Linear Operators. New York: Springer-Verlag, 1966.
[6] V. Koltchinskii. Asymptotics of spectral projections of some random matrices approximating integral operators. Progress in Probability, 43:191–227, 1998.
[7] V. Koltchinskii and E. Giné. Random matrix approximation of spectra of integral operators. Bernoulli, 6(1):113–167, 2000.
[8] B. Schölkopf, A. J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[9] J. Shawe-Taylor and N. Cristianini. Estimating the moments of a random vector with applications. Proceedings of the GRETSI 2003 Conference, p. 47–52, 2003.
[10] J. Shawe-Taylor, C. Williams, N. Cristianini, and J. Kandola. On the eigenspectrum of the Gram matrix and the generalisation error of Kernel PCA. IEEE Transactions on Information Theory 51 (7), p. 2510–2522, 2005.
[11] U. von Luxburg, M. Belkin, and O. Bousquet. Consistency of spectral clustering. Technical Report 134, Max Planck Institute for Biological Cybernetics, 2004.
[12] U. von Luxburg, O. Bousquet, and M. Belkin. On the convergence of spectral clustering on random samples: the normalized case. Proceedings of the 17th Annual Conference on Learning Theory (COLT 2004), p. 457–471. Springer, 2004.
[13] C. K. I. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. Proceedings of the 17th International Conference on Machine Learning (ICML), p. 1159–1166. Morgan Kaufmann, 2000.
sign:1 estimated:3 sup1:1 broadly:1 discrete:1 key:1 four:1 drawn:1 neither:1 merely:1 geometrically:1 sum:4 luxburg:2 inverse:5 angle:1 throughout:1 saying:1 appendix:2 comparable:1 bound:26 nonnegative:1 annual:1 adapted:1 occur:1 infinity:1 as2:1 sake:1 bousquet:3 bkop:1 span:5 according:1 combination:1 kbk:9 multiplicity:2 pr:4 kra:4 gathering:2 equation:1 previously:1 remains:1 describing:1 discus:2 letting:1 available:1 endowed:1 apply:1 spectral:5 schmidt:9 ematiques:1 shortly:1 original:4 denotes:1 clustering:3 ensure:1 restrictive:1 approximating:1 objective:2 question:2 already:1 quantity:1 hoffmann:1 gin:2 subspace:15 distance:1 separate:1 mapped:1 berlin:1 sci:2 topic:1 eigenspectrum:3 assuming:1 kst:1 rise:2 enclosing:1 perform:1 gilles:1 upper:2 allowing:1 finite:2 precise:1 perturbation:10 reproducing:1 paris:1 required:1 nip:1 k2op:4 below:2 pattern:1 reliable:1 max:1 improve:1 imply:4 picture:1 concludes:2 fraunhofer:1 sn:3 acknowledgement:1 tangent:1 l2:1 kf:2 asymptotic:6 interesting:1 enclosed:1 ingredient:1 unequivocally:1 course:2 penalized:1 supported:1 last:1 side:1 institute:1 taking:2 curve:2 dimension:5 depth:1 valid:1 gram:1 contour:1 made:1 projected:2 preprocessing:1 simplified:1 counted:1 far:1 transaction:1 keep:1 conclude:1 assumed:1 spectrum:4 continuous:1 sk:1 why:1 obtaining:1 main:4 motivation:1 whole:2 bounding:1 allowed:1 complementary:1 cxi:2 lc:3 lie:1 kcn:6 theorem:14 formula:1 kop:4 er:5 jensen:1 closeness:1 gap:9 cx:5 wielandt:2 springer:3 corresponds:1 relies:1 goal:2 consequently:1 towards:2 change:1 infinite:4 except:1 uniformly:2 generalisation:1 principal:4 lemma:11 called:3 indian:1 |
1,941 | 2,763 | A Probabilistic Interpretation of SVMs with an
Application to Unbalanced Classification
Yves Grandvalet ?
Heudiasyc, CNRS/UTC
60205 Compi`egne cedex, France
[email protected]
Johnny Mari?ethoz Samy Bengio
IDIAP Research Institute
1920 Martigny, Switzerland
{marietho,bengio}@idiap.ch
Abstract
In this paper, we show that the hinge loss can be interpreted as the
neg-log-likelihood of a semi-parametric model of posterior probabilities.
From this point of view, SVMs represent the parametric component of a
semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection enables us to derive a mapping from SVM scores
to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new
way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments
show improvements over state-of-the-art procedures.
1 Introduction
In this paper, we show that support vector machines (SVMs) are the solution of a relaxed
maximum a posteriori (MAP) estimation problem. This relaxed problem results from fitting
a semi-parametric model of posterior probabilities. This model is decomposed into two
components: the parametric component, which is a function of the SVM score, and the
non-parametric component which we call a nuisance function. Given a proper binding of
the nuisance function adapted to the considered problem, this decomposition enables to
concentrate on selected ranges of the probability spectrum. The estimation process can
thus allocate model capacity to the neighborhoods of decision boundaries.
The connection to semi-parametric models provides a probabilistic interpretation of SVM
scores, which may have several applications, such as estimating confidences over the predictions, or dealing with unbalanced losses (which occur in domains such as diagnosis, intruder detection, etc.). Several mappings relating SVM scores to probabilities have already been proposed (Sollich 2000, Platt 2000), but they are subject to arbitrary choices,
which are avoided here by their integration to the nuisance function.
The paper is organized as follows. Section 2 presents the semi-parametric modeling approach; Section 3 shows how we reformulate SVM in this framework; Section 4 proposes
several outcomes of this formulation, including a new method to handle unbalanced losses,
which is tested empirically in Section 5. Finally, Section 6 briefly concludes the paper.
* This work was supported in part by the IST Programme of the European Community, under the
PASCAL Network of Excellence IST-2002-506778. This publication only reflects the authors? views.
2 Semi-Parametric Classification
We address the binary classification problem of estimating a decision rule from a learning
set Ln = {(xi, yi)}_{i=1}^n, where the ith example is described by the pattern xi ∈ X and the associated response yi ∈ {−1, 1}. In the framework of maximum likelihood estimation, classification can be addressed either via generative models, i.e. models of the joint
distribution P (X, Y ), or via discriminative methods modeling the conditional P (Y |X).
2.1 Complete and Marginal Likelihood, Nuisance Functions
Let p(1|x; θ) denote the model of P(Y = 1|X = x), p(x; η) the model of P(X), and ti the binary response variable such that ti = 1 when yi = 1 and ti = 0 when yi = −1. Assuming independent examples, the complete log-likelihood can be decomposed as

L(θ, η; Ln) = Σ_i [ ti log(p(1|xi; θ)) + (1 − ti) log(1 − p(1|xi; θ)) + log(p(xi; η)) ] ,       (1)
where the two first terms of the right-hand side represent the marginal or conditional likelihood, that is, the likelihood of p(1|x; ?).
For classification purposes, the parameter η is not relevant, and may thus be qualified as a nuisance parameter (Lindsay 1985). When η can be estimated independently of θ, maximizing the marginal likelihood provides the estimate returned by maximizing the complete likelihood with respect to θ and η. In particular, when no assumption whatsoever is made
on P (X), maximizing the conditional likelihood amounts to maximize the joint likelihood
(McLachlan 1992). The density of inputs is then considered as a nuisance function.
2.2 Semi-Parametric Models
Again, for classification purposes, estimating P(Y|X) may be considered as too demanding. Indeed, taking a decision only requires the knowledge of sign(2P(Y = 1|X = x) − 1).
We may thus consider looking for the decision rule minimizing the empirical classification
error, but this problem is intractable for non-trivial models of discriminant functions.
Here, we briefly explore how semi-parametric models (Oakes 1988) may be used to reduce the modelization effort as compared to the standard likelihood approach. For this,
we consider a two-component semi-parametric model of P (Y = 1|X = x), defined as
p(1|x; θ) = g(x; θ) + ε(x), where the parametric component g(x; θ) is the function of interest, and where the non-parametric component ε is a constrained nuisance function. Then, we address the maximum likelihood estimation of the semi-parametric model p(1|x; θ):

min_{θ,ε}  − Σ_i [ ti log(p(1|xi; θ)) + (1 − ti) log(1 − p(1|xi; θ)) ]
s.t.  p(1|x; θ) = g(x; θ) + ε(x)
      0 ≤ p(1|x; θ) ≤ 1
      ε−(x) ≤ ε(x) ≤ ε+(x)       (2)
where ε− and ε+ are user-defined functions, which place constraints on the non-parametric component ε. According to these constraints, one pursues different objectives, which can be interpreted as either weakened or focused versions of the original problem of estimating precisely P(Y|X) on the whole range [0, 1].
At the one extreme, when ε− = ε+, one recovers a parametric maximum likelihood problem, where the estimate of posterior probabilities p(1|x; θ) is simply g(x; θ) shifted by the baseline function ε. At the other extreme, when ε−(x) ≡ −g(x) and ε+(x) ≡ 1 − g(x), p(1|·; θ) perfectly explains (interpolates) any training sample for any θ, and the optimization problem in θ is ill-posed. Note that the optimization problem in ε is always ill-posed,
but this is not of concern as we do not wish to estimate the nuisance function.
[Figure 1 graphs: ε−(x)/ε+(x) and the resulting band of admissible p(1|x), each plotted against g(x).]
Figure 1: Two examples of ε−(x) (dashed) and ε+(x) (plain) vs. g(x) and resulting ε-tube of possible values for the estimate of P(Y = 1|X = x) (gray zone) vs. g(x).
Generally, as ε is not estimated, the estimate of posterior probabilities p(1|x; θ) is only known to lie within the interval [g(x; θ) + ε−(x), g(x; θ) + ε+(x)]. In what follows, we only consider functions ε− and ε+ expressed as functions of the argument g(x), for which the interval can be recovered from g(x) alone. We also require ε−(x) ≤ 0 ≤ ε+(x), in order to ensure that g(x; θ) is an admissible value of p(1|x; θ).
Two simple examples are displayed in Figure 1. The two first graphs represent ε− and ε+ designed to estimate posterior probabilities up to precision δ, and the corresponding ε-tube of admissible estimates knowing g(x). The two last graphs represent the same functions for ε− and ε+ defined to focus on the only relevant piece of information regarding decision: estimating where P(Y|X) is above 1/2.¹
2.3 Estimation of the Parametric Component
The definitions of ε− and ε+ affect the estimation of the parametric component. Regarding θ, when the values of g(x; θ) + ε−(x) and g(x; θ) + ε+(x) lie within [0, 1], problem (2) is equivalent to the following relaxed maximum likelihood problem:

min_{θ,ξ}  − Σ_i [ ti log(g(xi; θ) + ξi) + (1 − ti) log(1 − g(xi; θ) − ξi) ]
s.t.  ε−(xi) ≤ ξi ≤ ε+(xi) ,   i = 1, . . . , n       (3)

where ξ is an n-dimensional vector of slack variables. The problem is qualified as relaxed
compared to the maximum likelihood estimation of posterior probabilities by g(xi; θ), because modeling posterior probabilities by g(xi; θ) + ξi is a looser objective.
The monotonicity of the objective function with respect to ξi implies that the constraints ε−(xi) ≤ ξi and ξi ≤ ε+(xi) are saturated at the solution of (3) for ti = 0 or ti = 1 respectively. Thus, the loss in (3) is the neg-log-likelihood of the lower or the upper bound on p(1|xi; θ) respectively. Provided that g, ε− and ε+ are defined such that ε−(x) ≤ ε+(x), 0 ≤ g(x) + ε−(x) ≤ 1 and 0 ≤ g(x) + ε+(x) ≤ 1, the optimization problem with respect to θ reduces to
min_θ  − Σ_i [ ti log(g(xi; θ) + ε+(xi)) + (1 − ti) log(1 − g(xi; θ) − ε−(xi)) ] .       (4)
Figure 2 displays the losses for positive examples corresponding to the choices of ε− and ε+ depicted in Figure 1 (the losses are symmetrical around 0.5 for negative examples). Note that the convexity of the objective function with respect to g depends on the choices of ε− and ε+. One can show that, provided ε+ and ε− are respectively concave and convex functions of g, the loss (4) is convex in g.
When ε−(x) ≤ 0 ≤ ε+(x), g(x) is an admissible estimate of P(Y = 1|x). However, the relaxed loss (4) is optimistic, below the neg-log-likelihood of g. This optimism usually
¹ Of course, this naive attempt to minimize the training classification error is doomed to failure.
Reformulating the problem does not affect its convexity: it remains NP-hard.
[Figure 2 graphs: L(g(x), 1) plotted against g(x), left and right panels.]
Figure 2: Losses for positive examples (plain) and neg-log-likelihood of g(x) (dotted) vs. g(x). Left: for the function ε+ displayed on the left-hand side of Figure 1; right: for the function ε+ displayed on the right-hand side of Figure 1.
results in a non-consistent estimation of posterior probabilities (i.e., g(x) does not converge towards P(Y = 1|X = x) as the sample size goes to infinity), a common situation in semi-parametric modeling (Lindsay 1985). This lack of consistency should not be a concern here, since the non-parametric component is purposely introduced to address a looser estimation problem. We should therefore restrict consistency requirements to the primary goal of having posterior probabilities in the ε-tube [g(x) + ε−(x), g(x) + ε+(x)].
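To make the relaxed loss (4) concrete, here is a minimal numerical sketch (our own illustration, not code from the paper; the clipping constants are numerical-safety assumptions):

```python
import numpy as np

def relaxed_nll(g, t, eps_minus, eps_plus):
    """Relaxed negative log-likelihood of Eq. (4).

    g         : array of parametric-component values g(x_i), in [0, 1]
    t         : array of binary responses t_i in {0, 1}
    eps_minus : array of lower slack values eps-(x_i) <= 0
    eps_plus  : array of upper slack values eps+(x_i) >= 0
    """
    upper = np.clip(g + eps_plus, 1e-12, 1.0)        # bound used for positive examples
    lower = np.clip(g + eps_minus, 0.0, 1 - 1e-12)   # bound used for negative examples
    return -np.sum(t * np.log(upper) + (1 - t) * np.log(1 - lower))

# Example: estimating posteriors only up to a precision delta = 0.1
g = np.array([0.2, 0.6, 0.9])
t = np.array([0, 1, 1])
delta = 0.1
print(relaxed_nll(g, t, -delta * np.ones_like(g), delta * np.ones_like(g)))
```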
3 Semi-Parametric Formulation of SVMs
Several authors have pointed out the closeness of SVMs and the MAP approach to Gaussian processes (Sollich (2000) and references therein). However, this similarity does not provide a
proper mapping from SVM scores to posterior probabilities. Here, we resolve this difficulty
thanks to the additional degrees of freedom provided by semi-parametric modelling.
3.1 SVMs and Gaussian Processes
In its primal Lagrangian formulation, the SVM optimization problem reads
min_{f,b}  (1/2) ‖f‖²_H + C Σ_i [1 − yi(f(xi) + b)]+ ,       (5)

where H is a reproducing kernel Hilbert space with norm ‖·‖_H, C is a regularization parameter and [f]+ = max(f, 0).
The penalization term in (5) can be interpreted as a Gaussian prior on f , with a covariance
function proportional to the reproducing kernel of H (Sollich 2000). Then, the interpretation of the hinge loss as a marginal log-likelihood requires to identify an affine function of
the last term of (5) with the two first terms of (1). We thus look for two constants c0 and
c1 ≠ 0, such that, for all values of f(x) + b, there exists a value 0 ≤ p(1|x) ≤ 1 such that

p(1|x) = exp −(c0 + c1 [1 − (f(x) + b)]+)
1 − p(1|x) = exp −(c0 + c1 [1 + (f(x) + b)]+) .       (6)
The system (6) has a solution over the whole range of possible values of f (x) + b if and
only if c0 = log(2) and c1 = 0. Thus, the SVM optimization problem does not implement
the MAP approach to Gaussian processes.
To proceed with a probabilistic interpretation of SVMs, Sollich (2000) proposed a normalized probability model. The normalization functional was chosen arbitrarily, and the
consequences of this choice on the probabilistic interpretation was not evaluated. In what
follows, we derive an imprecise mapping, with interval-valued estimates of probabilities,
representing the set of all admissible semi-parametric formulations of SVM scores.
3.2 SVMs and Semi-Parametric Models
With the semi-parametric models of Section 2.2, one has to identify an affine function of
the hinge loss with the two terms of (4). Compared to the previous situation, one has the
[Figure 3 graphs: p(1|x) and L(g(x), 1) plotted against f(x)+b, and p(1|x) against g(x).]
Figure 3: Left: lower (dashed) and upper (plain) posterior probabilities [g(x) + ε−(x), g(x) + ε+(x)] vs. SVM scores f(x) + b; center: corresponding neg-log-likelihood of g(x) for positive examples vs. f(x) + b; right: lower (dashed) and upper (plain) posterior probabilities vs. g(x), for g defined in (8).
freedom to define the slack functions ε− and ε+. The identification problem is now

g(x) + ε+(x) = exp −(c0 + c1 [1 − (f(x) + b)]+)
1 − g(x) − ε−(x) = exp −(c0 + c1 [1 + (f(x) + b)]+)
s.t.  0 ≤ g(x) + ε−(x) ≤ 1
      0 ≤ g(x) + ε+(x) ≤ 1
      ε−(x) ≤ ε+(x)       (7)
Provided c0 = 0 and 0 < c1 ≤ log(2), there are functions g, ε− and ε+ such that the above problem has a solution. Hence, we obtain a set of probabilistic interpretations fully compatible with SVM scores. The solutions indexed by c1 are nested, in the sense that, for any x, the length of the uncertainty interval, ε+(x) − ε−(x), is monotonically decreasing in c1: the interpretation of SVM scores as posterior probabilities gets tighter as c1 increases.
The most restricted subset of admissible interpretations, with the shortest uncertainty intervals, obtained for c1 = log(2), is represented in the left-hand side of Figure 3. The loss incurred by a positive example is represented on the central graph, where the gray zone represents the neg-log-likelihood of all admissible solutions of g(x). Note that the hinge loss is proportional to the neg-log-likelihood of the upper posterior probability g(x) + ε+(x), which is the loss for positive examples in the semi-parametric model in (4). Conversely, the hinge loss for negative examples is reached for g(x) + ε−(x). An important observation, that will be useful in Section 4.2, is that the neg-log-likelihood of any admissible function g(x) is tangent to the hinge loss at f(x) + b = 0.
The solution is unique in terms of the admissible interval [g + ε−, g + ε+], but many definitions of (ε−, ε+, g) solve (7). For example, g may be defined as

g(x; θ) = 2^{−[1−(f(x)+b)]+} / ( 2^{−[1+(f(x)+b)]+} + 2^{−[1−(f(x)+b)]+} ) ,       (8)

which is essentially the posterior probability model proposed by Sollich (2000), represented dotted in the first two graphs of Figure 3.
The last graph of Figure 3 displays the mapping from g(x) to admissible values of p(1|x) which results from the choice described in (8). Although the interpretation of SVM scores does not require specifying g, it is worth listing some features common to all options. First, g(x) + ε−(x) = 0 for all g(x) below some threshold g0 > 0, and conversely, g(x) + ε+(x) = 1 for all g(x) above some threshold g1 < 1. These two features are responsible
for the sparsity of the SVM solution. Second, the estimation of posterior probabilities is
accurate at 0.5, and the length of the uncertainty interval on p(1|x) monotonically increases
in [g0 , 0.5] and then monotonically decreases in [0.5, g1 ]. Hence, the training objective of
SVMs is intermediate between the accurate estimation of posterior probabilities on the
whole range [0, 1] and the minimization of the classification risk.
4 Outcomes of the Probabilistic Interpretation
This section gives two consequences of our probabilistic interpretation of SVMs. Further
outcomes, still reserved for future research, are listed in Section 6.
4.1 Pointwise Posterior Probabilities from SVM Scores
Platt (2000) proposed to estimate posterior probabilities from SVM scores by fitting a logistic function over the SVM scores. The only logistic function compatible with the most
stringent interpretation of SVMs in the semi-parametric framework,

g(x; θ) = 1 / (1 + 4^{−(f(x)+b)}) ,       (9)

is identical to the model of Sollich (2000) (8) when f(x) + b lies in the interval [−1, 1].
Other logistic functions are compatible with the looser interpretations obtained by letting
c1 < log(2), but their use as pointwise estimates is questionable, since the associated
confidence interval is wider. In particular, the looser interpretations do not ensure that
f(x) + b = 0 corresponds to g(x) = 0.5. Then, the decision function based on the posterior probabilities estimated by g(x) may differ from the SVM decision function.
Being based on an arbitrary choice of g(x), pointwise estimates of posterior probabilities
derived from SVM scores should be handled with caution. As discussed by Zhang (2004),
they may only be consistent at f (x) + b = 0, where they may converge towards 0.5.
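A quick numerical check of this identity (our own verification, assuming the forms of Eqs. 8 and 9 as reconstructed above):

```python
import numpy as np

def g_sollich(s):
    """Eq. (8): Sollich-style posterior model as a function of the score s = f(x) + b."""
    num = 2.0 ** (-np.maximum(1.0 - s, 0.0))
    den = 2.0 ** (-np.maximum(1.0 + s, 0.0)) + num
    return num / den

def g_logistic(s):
    """Eq. (9): the logistic map 1 / (1 + 4^(-s))."""
    return 1.0 / (1.0 + 4.0 ** (-s))

s_in = np.linspace(-1.0, 1.0, 5)
s_out = np.array([-2.0, 2.0])
print(np.allclose(g_sollich(s_in), g_logistic(s_in)))   # True: identical on [-1, 1]
print(g_sollich(s_out), g_logistic(s_out))              # the two differ outside [-1, 1]
```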
4.2 Unbalanced Classification Losses
SVMs are known to perform well regarding misclassification error, but they provide skewed
decision boundaries for unbalanced classification losses, where the losses associated with
incorrect decisions differ according to the true label. The mainstream approach used to
address this problem consists in using different losses for positive and negative examples
(Morik et al. 1999, Veropoulos et al. 1999), i.e.
min_{f,b}  (1/2) ‖f‖²_H + C+ Σ_{i|yi=1} [1 − (f(xi) + b)]+ + C− Σ_{i|yi=−1} [1 + (f(xi) + b)]+ ,       (10)
where the coefficients C+ and C− are constants, whose ratio is equal to the ratio of the losses λFN and λFP pertaining to false negatives and false positives, respectively (Lin et al. 2002).² Bayes' decision theory defines the optimal decision rule by positive classification when P(y = 1|x) > P0, where P0 = λFP / (λFP + λFN). We may thus rewrite C+ = C (1 − P0) and C− = C P0. With such definitions, the optimization problem may be interpreted as an upper-bound on the classification risk defined from λFN and λFP. However, the machinery of Section 3.2 unveils a major problem: the SVM decision function provided by
sign(f (xi ) + b) is not consistent with the probabilistic interpretation of SVM scores.
We address this problem by deriving another criterion, requiring that the neg-log-likelihood of any admissible function g(x) is tangent to the hinge loss at f(x) + b = 0. This leads to the following problem:

min_{f,b}  (1/2) ‖f‖²_H + C [ Σ_{i|yi=1} [−log(P0) − (1 − P0)(f(xi) + b)]+ + Σ_{i|yi=−1} [−log(1 − P0) + P0 (f(xi) + b)]+ ] .       (11)
² False negatives/positives respectively designate positive/negative examples incorrectly classified.
[Figure 4 graphs: p(1|x) and L(g(x), 1) plotted against f(x)+b, and p(1|x) against g(x).]
Figure 4: Left: lower (dashed) and upper (plain) posterior probabilities [g(x) + ε−(x), g(x) + ε+(x)] vs. SVM scores f(x) + b obtained from (11) with P0 = 0.25; center: corresponding neg-log-likelihood of g(x) for positive examples vs. f(x) + b; right: lower (dashed) and upper (plain) posterior probabilities vs. g(x), for g defined by ε+(x) = 0 for f(x) + b ≥ 0 and ε−(x) = 0 for f(x) + b ≤ 0.
This loss differs from (10), in the respect that the margin for positive examples is smaller
than the one for negative examples when P0 < 0.5. In particular, (10) does not affect
the SVM solution for separable problems, while in (11), the decision boundary moves
towards positive support vectors when P0 decreases. The analogue of Figure 3, displayed
on Figure 4, shows that one recovers the characteristics of the standard SVM loss, except
that the focus is now on the posterior probability P0 defined by Bayes' decision rule.
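A minimal sketch of the per-example losses in problem (11), with the common factor C omitted (our own illustration, not code from the paper):

```python
import numpy as np

def unbalanced_hinge_losses(score, P0):
    """Per-example losses of problem (11), common factor C omitted.
    P0 = lambda_FP / (lambda_FP + lambda_FN) is the Bayes threshold."""
    loss_pos = np.maximum(-np.log(P0) - (1.0 - P0) * score, 0.0)   # for yi = +1
    loss_neg = np.maximum(-np.log(1.0 - P0) + P0 * score, 0.0)     # for yi = -1
    return loss_pos, loss_neg

# With P0 = 0.25, the positive-class loss vanishes for score >= -log(P0)/(1-P0),
# and the negative-class loss vanishes for score <= log(1-P0)/P0.
s = np.linspace(-4.0, 4.0, 9)
lp, ln = unbalanced_hinge_losses(s, 0.25)
print(lp, ln, sep="\n")
```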
5 Experiments with Unbalanced Classification Losses
It is straightforward to implement (11) in standard SVM packages. For experimenting with
difficult unbalanced two-class problems, we used the Forest database, the largest available
UCI dataset (http://kdd.ics.uci.edu/databases/covertype/). We consider the subproblem of discriminating the positive class Krummholz (20510 examples)
against the negative class Spruce/Fir (211840 examples). The ratio of negative to positive
examples is high, a feature commonly encountered with unbalanced classification losses.
The training set was built by random selection of size 11 000 (1000 and 10 000 examples
from the positive and negative class respectively); a validation set, of size 11 000 was drawn
identically among the other examples; finally, the test set, of size 99 000, was drawn among
the remaining examples.
The performance was measured by the weighted risk function R = (1/n)(NFN λFN + NFP λFP), where NFN and NFP are the number of false negatives and false positives, respectively. The loss λFP was set to one, and λFN was successively set to 1, 10 and 100, in order to penalize more and more heavily errors from the under-represented class.
All approaches were tested using SVMs with a Gaussian kernel on normalized data. The hyper-parameters were tuned on the validation set for each of the λFN values. We additionally considered three tunings for the bias b: b̂ is the bias returned by the algorithm; b̂v is the bias returned by minimizing R on the validation set, which is an optimistic estimate of the bias that could be computed by cross-validation. We also provide results for b*, the optimal bias computed on the test set. This "crystal ball" tuning may not represent an achievable goal, but it shows how far we are from the optimum. Table 1 compares the risk R obtained with the three approaches for the different values of λFN.
The first line, with λFN = 1, corresponds to the standard classification error, where all training criteria are equivalent in theory and in practice. The bias returned by the algorithm is very close to the optimal one. For λFN = 10 and λFN = 100, the models obtained by optimizing C+/C− (10) and P0 (11) achieve better results than the baseline with the crystal ball bias. While the solutions returned by C+/C− can be significantly improved
Table 1: Errors for 3 different criteria and for 3 different models over the Forest database

         Baseline, problem (5)    C+/C−, problem (10)       P0, problem (11)
 λFN        b̂        b*            b̂      b̂v      b*         b̂      b̂v      b*
   1      0.027    0.026         0.027  0.027  0.026      0.027  0.027  0.026
  10      0.167    0.108         0.105  0.104  0.094      0.095  0.104  0.094
 100      1.664    0.406         0.403  0.291  0.289      0.295  0.291  0.289
by tuning the bias, our criterion provides results that are very close to the optimum, in the
range of the performances obtained with the bias optimized on an independent validation
set. The new optimization criterion can thus outperform standard approaches for highly
unbalanced problems.
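A minimal sketch of this evaluation protocol, i.e., the weighted risk and the validation-based bias tuning b̂v (our own illustration; the paper gives no code, and the candidate-threshold construction below is our assumption):

```python
import numpy as np

def weighted_risk(scores, y, b, lam_fn, lam_fp):
    """Weighted risk R = (N_FN * lam_FN + N_FP * lam_FP) / n for the rule sign(score + b)."""
    pred = np.where(scores + b > 0, 1, -1)
    n_fn = np.sum((y == 1) & (pred == -1))
    n_fp = np.sum((y == -1) & (pred == 1))
    return (n_fn * lam_fn + n_fp * lam_fp) / len(y)

def tune_bias(scores, y, lam_fn, lam_fp):
    """The b_v tuning: pick the bias minimizing R on a held-out (validation) set.
    Candidate biases between consecutive sorted scores cover all distinct decision rules."""
    s = np.sort(scores)
    candidates = np.concatenate([[-s[0] + 1.0], -(s[:-1] + s[1:]) / 2.0, [-s[-1] - 1.0]])
    risks = [weighted_risk(scores, y, b, lam_fn, lam_fp) for b in candidates]
    return candidates[int(np.argmin(risks))]
```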
6 Conclusion
This paper introduced a semi-parametric model for classification which provides an interesting viewpoint on SVMs. The non-parametric component provides an intuitive means of
transforming the likelihood into a decision-oriented criterion. This framework was used
here to propose a new parameterization of the hinge loss, dedicated to unbalanced classification problems, yielding significant improvements over the classical procedure.
Among other prospects, we plan to apply the same framework to investigate hinge-like
criteria for decision rules including a reject option, where the classifier abstains when a
pattern is ambiguous. We also aim at defining losses encouraging sparsity in probabilistic
models, such as kernelized logistic regression. We could thus build sparse probabilistic
classifiers, providing an accurate estimation of posterior probabilities on a (limited) predefined range of posterior probabilities. In particular, we could derive decision-oriented
criteria for multi-class probabilistic classifiers. For example, minimizing classification error only requires finding the class with highest posterior probability, and this search does
not require precise estimates of probabilities outside the interval [1/K, 1/2], where K is
the number of classes.
References
Y. Lin, Y. Lee, and G. Wahba. Support vector machines for classification in non-standard situations. Machine Learning, 46:191–202, 2002.
B. G. Lindsay. Nuisance parameters. In S. Kotz, C. B. Read, and D. L. Banks, editors, Encyclopedia of Statistical Sciences, volume 6. Wiley, 1985.
G. J. McLachlan. Discriminant analysis and statistical pattern recognition. Wiley, 1992.
K. Morik, P. Brockhausen, and T. Joachims. Combining statistical learning with a knowledge-based approach - a case study in intensive care monitoring. In Proceedings of ICML, 1999.
D. Oakes. Semi-parametric models. In S. Kotz, C. B. Read, and D. L. Banks, editors, Encyclopedia of Statistical Sciences, volume 8. Wiley, 1988.
J. C. Platt. Probabilities for SV machines. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 61–74. MIT Press, 2000.
P. Sollich. Probabilistic methods for support vector machines. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 349–355, 2000.
K. Veropoulos, C. Campbell, and N. Cristianini. Controlling the sensitivity of support vector machines. In T. Dean, editor, Proc. of the IJCAI, pages 55–60, 1999.
T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56–85, 2004.
| 2763 |@word version:1 briefly:2 achievable:1 norm:1 c0:7 decomposition:1 covariance:1 p0:14 independant:1 score:17 tuned:1 recovered:1 mari:1 fn:11 kdd:1 enables:2 designed:1 v:9 alone:1 generative:1 selected:1 parameterization:1 ith:1 egne:1 provides:5 zhang:2 incorrect:1 consists:1 fitting:2 excellence:1 indeed:1 behavior:1 multi:1 utc:2 decomposed:2 decreasing:1 resolve:1 encouraging:1 provided:4 estimating:5 what:2 interpreted:4 caution:1 whatsoever:1 ti:13 concave:1 questionable:1 classifier:4 platt:3 positive:17 consequence:2 therein:1 weakened:1 conversely:2 limited:1 range:6 unique:1 responsible:1 practice:1 implement:2 differs:1 procedure:3 empirical:1 significantly:1 reject:1 imprecise:1 confidence:2 get:1 close:2 selection:1 risk:5 equivalent:2 dean:1 map:3 lagrangian:1 center:2 maximizing:3 pursues:1 go:1 straightforward:1 independently:1 convex:3 focused:1 nfn:2 rule:5 deriving:1 handle:1 annals:1 controlling:1 lindsay:3 heavily:1 user:1 samy:1 recognition:1 asymmetric:1 veropoulos:2 database:3 subproblem:1 solla:1 decrease:2 highest:1 transforming:1 convexity:2 cristianini:1 unveils:1 rewrite:1 joint:2 represented:4 pertaining:1 hyper:1 neighborhood:1 outcome:3 outside:1 whose:1 posed:2 valued:2 solve:1 loglikelihood:1 statistic:1 g1:2 propose:1 fr:1 grandval:1 relevant:2 uci:2 combining:1 achieve:1 intuitive:1 kh:1 olkopf:1 ijcai:1 requirement:1 optimum:2 wider:1 derive:3 measured:1 idiap:2 implies:1 differ:2 switzerland:1 concentrate:1 abstains:1 stringent:1 explains:1 require:3 tighter:1 designate:1 around:1 considered:4 ic:1 exp:4 k2h:3 mapping:6 major:1 purpose:2 estimation:13 proc:1 label:1 largest:1 reflects:1 weighted:1 mclachlan:2 minimization:2 mit:1 uller:1 always:1 gaussian:5 aim:1 publication:1 heudiasyc:1 focus:2 derived:1 joachim:1 improvement:2 modelling:1 likelihood:25 experimenting:1 baseline:3 sense:1 posteriori:2 cnrs:1 kernelized:1 france:1 classification:22 ill:2 pascal:1 among:3 proposes:1 plan:1 art:1 integration:1 constrained:1 marginal:4 equal:1 having:1 identical:1 represents:1 look:1 icml:1 future:1 np:1 oriented:2 n1:1 attempt:1 freedom:2 detection:1 interest:1 highly:1 investigate:1 saturated:1 extreme:2 yielding:1 primal:1 predefined:1 accurate:3 machinery:1 indexed:1 fitted:1 modeling:4 subset:1 too:1 sv:1 thanks:1 density:1 sensitivity:1 discriminating:1 probabilistic:12 lee:1 again:1 central:1 tube:3 successively:1 oakes:2 fir:1 coefficient:1 depends:1 piece:1 view:2 optimistic:2 reached:1 bayes:2 option:2 minimize:1 yves:1 ni:1 reserved:1 characteristic:1 identify:2 identification:1 monitoring:1 worth:1 classified:1 definition:3 failure:1 against:1 associated:3 recovers:2 dataset:1 knowledge:2 organized:1 hilbert:1 campbell:1 response:2 specify:1 improved:1 formulation:4 evaluated:1 leen:1 smola:1 hand:4 lack:1 defines:1 logistic:4 gray:2 requiring:1 true:1 normalized:2 regularization:1 hence:2 reformulating:1 read:3 skewed:1 nuisance:9 ambiguous:1 criterion:8 crystal:2 complete:3 dedicated:1 purposely:1 common:2 functional:1 empirically:1 modelization:1 volume:2 discussed:1 interpretation:15 relating:1 doomed:1 significant:1 tuning:3 consistency:3 pointed:1 similarity:1 mainstream:1 etc:1 posterior:29 optimizing:1 binary:2 arbitrarily:1 yi:9 neg:10 additional:1 relaxed:5 care:1 converge:2 maximize:1 shortest:1 monotonically:3 dashed:5 semi:20 reduces:1 adapt:1 offer:1 cross:1 lin:2 prediction:1 regression:1 essentially:1 represent:5 kernel:3 normalization:1 c1:12 proposal:1 penalize:1 interval:11 addressed:1 sch:1 unlike:1 cedex:1 
subject:1 call:1 intermediate:1 bengio:2 identically:1 affect:3 perfectly:1 restrict:1 wahba:1 reduce:1 regarding:3 knowing:1 intensive:1 optimism:1 allocate:1 handled:1 bartlett:1 effort:1 returned:5 interpolates:1 proceed:1 generally:1 useful:1 listed:1 amount:1 encyclopedia:2 svms:13 http:1 outperform:1 shifted:1 dotted:2 sign:2 estimated:4 diagnosis:1 ist:2 threshold:2 drawn:2 graph:5 package:1 uncertainty:3 place:1 kotz:2 intruder:1 looser:4 decision:18 bound:2 display:2 encountered:1 adapted:1 occur:1 covertype:1 constraint:3 precisely:1 infinity:1 bv:3 argument:1 min:6 separable:1 according:2 ball:2 smaller:1 sollich:7 restricted:1 ln:2 remains:1 slack:2 letting:1 available:1 apply:1 original:1 remaining:1 ensure:2 hinge:9 build:1 classical:1 objective:5 g0:2 already:1 move:1 parametric:31 primary:1 capacity:1 discriminant:2 trivial:1 assuming:1 length:2 pointwise:3 morik:2 reformulate:1 providing:3 minimizing:3 ratio:3 difficult:1 negative:11 martigny:1 proper:2 perform:1 upper:7 observation:1 displayed:4 incorrectly:1 situation:3 defining:1 looking:1 precise:1 reproducing:2 arbitrary:2 community:1 introduced:2 connection:2 optimized:1 unequal:1 address:5 suggested:1 below:2 pattern:3 usually:1 fp:6 sparsity:2 built:1 including:2 max:1 analogue:1 misclassification:1 demanding:1 difficulty:1 representing:1 concludes:1 naive:1 prior:1 tangent:2 kf:3 loss:30 fully:1 interesting:1 proportional:2 penalization:1 ethoz:1 validation:5 incurred:1 degree:1 affine:2 consistent:3 viewpoint:1 bank:2 grandvalet:1 editor:5 compatible:4 course:1 supported:1 last:3 qualified:2 side:4 bias:9 institute:1 nfp:2 taking:1 sparse:1 boundary:3 plain:6 author:2 made:1 commonly:1 avoided:1 programme:1 far:1 dealing:1 monotonicity:1 symmetrical:1 xi:24 discriminative:1 spectrum:1 search:1 table:2 additionally:1 forest:2 schuurmans:1 european:1 domain:1 whole:3 wiley:3 precision:1 wish:1 lie:3 admissible:10 list:1 svm:28 concern:2 closeness:1 intractable:1 exists:1 false:5 compi:1 margin:2 depicted:1 simply:1 explore:1 expressed:1 binding:1 ch:1 nested:1 corresponds:2 brockhausen:1 conditional:3 goal:2 towards:3 hard:1 except:1 zone:2 support:5 unbalanced:11 tested:2 |
1,942 | 2,764 | The Information-Form Data Association Filter
Brad Schumitsch, Sebastian Thrun, Gary Bradski, and Kunle Olukotun
Stanford AI Lab
Stanford University, Stanford, CA 94305
Abstract
This paper presents a new filter for online data association problems in
high-dimensional spaces. The key innovation is a representation of the
data association posterior in information form, in which the "proximity" of objects and tracks is expressed by numerical links. Updating
these links requires linear time, compared to exponential time required
for computing the exact posterior probabilities. The paper derives the
algorithm formally and provides comparative results using data obtained
by a real-world camera array and by a large-scale sensor network simulation.
1 Introduction
This paper addresses the problem of data association in online object tracking [6]. The data
association problem arises in a large number of application domains, including computer
vision, robotics, and sensor networks.
Our setup assumes an online tracking system that receives two types of data: sensor
data, conveying information about the identity or type of objects that are being tracked; and
transition data, characterizing the uncertainty introduced through the tracker's inability to
reliably track individual objects over time. The setup is motivated by a camera network
which we recently deployed in our lab. Here sensor data relates to the color of clothing of
individual people, which enables us to identify them. Tracks are lost when people walk too
closely together, or when they occlude each other.
We show that the standard probabilistic solution to the discrete data association problem requires exponential update time and exponential memory. This is because each data
association hypothesis is expressed by a permutation matrix that assigns computer-internal
tracks to objects in the physical world. An optimal filter would therefore need to maintain
a probability distribution over the space of all permutation matrices, which grows exponentially with N , the number of objects in the world. The common remedy involves the
selection of a small number K of likely hypotheses. This is the core of numerous widely used multi-hypothesis tracking algorithms [9, 1]. More recent solutions involve particle filters [3], which maintain stochastic samples of hypotheses. Both of these techniques are very effective for small N, but the number of hypotheses they require grows exponentially
with N .
This paper provides a filter algorithm that scales to much larger problems. This filter
maintains an information matrix Ω of size N × N, which relates tracks to physical objects in the world. The rows of Ω correspond to object identities, the columns to the tracks of the tracker. Ω is a matrix in information form, that is, it can be thought of as a non-normalized
log-probability.
Fig. 1a shows an example. The highlighted first column corresponds to track 1 in
the tracker. The numerical values in this column suggest that this track is most strongly
(a) Example: Information matrix

    Ω = [  2  12   4   4
           1   2  11   0
          10   4   4  15
           5   2   1   2 ]

(b) Most likely data association

    Â = argmax_A tr(A^T Ω) = [ 0 1 0 0
                               0 0 1 0
                               0 0 0 1
                               1 0 0 0 ]

(c) Update: Associating track 2 with object 4

    [  2  12   4   4        [  2  12   4   4
       1   2  11   0    →      1   2  11   0
      10   4   4  15          10   4   4  15
       5   2   1   2 ]         5   3   1   2 ]

(d) Update: Tracks 2 and 3 merge

    [  2  12   4   4        [  2  11.31  11.31   4
       1   2  11   0    →      1  10.31  10.31   0
      10   4   4  15          10   3.31   3.31  15
       5   3   1   2 ]         5   2.43   2.43   2 ]
(e) Graphical network interpretation of the information form
Figure 1: Illustration of the information form filter for data association in object tracking
associated with object 3, since the value 10 dominates all other values in this column.
Thus, looking at column 1 of Ω in isolation would have us conclude that the most likely association of track 1 is object 3. However, the most likely permutation matrix is shown in Fig. 1b; from all possible data association assignments, this matrix receives the highest score. Its score is tr Â^T Ω = 5 + 12 + 11 + 15 = 43 (here "tr" denotes the trace of a
matrix). This permutation matrix associates object 3 with track 4, while associating track
1 with object 4.
The key question now pertains to the construction of Ω. As we shall see, the update operations for Ω are simple and parallelizable. Suppose we receive a measurement that associates track 2 with object 4 (e.g., track 2's hair color appears to be the same as person 4's hair color in our camera array). As a result, our approach adds a value to the element in Ω that links object 4 and track 2, as illustrated in Fig. 1c (the exact magnitude of this value
will be discussed below). Similarly, suppose our tracker is unable to distinguish between
objects 2 and 3, perhaps because these objects are so close together in a camera image that
they cannot be tracked individually. Such a situation leads to a new information matrix, in
which both columns assume the same values, as illustrated in Fig. 1d. The exact values in
this new information matrix are the result of an exponentiated averaging explained below.
All of these updates are easily parallelized, and hence are applicable to a decentralized
network of cameras. The exact update and inference rules are based on a probabilistic
model that is also discussed below.
Given the importance of data association, it comes as no surprise that our algorithm is
related to a rich body of prior work. The data association problem has been studied as an
offline problem, in which all data is memorized and inference takes place after data collection. There exists a wealth of powerful methods, such as RANSAC [4] and MCMC [6, 2],
but those are inherently offline and their memory requirements increase over time. The
dominant online, or filter, paradigm involves the selection of K representative samples
of the data association matrix, but such algorithms tend to work only for small N [11].
Relatively little work has focused on the development of compact sufficient statistics for
data association. One alternative O(N²) technique to the one proposed here was explored in [8]. This technique uses doubly stochastic matrices, which are computationally hard to maintain. The first mention of information filters is in [8], but the update rules there were computationally less efficient (in O(N⁴)) and required central optimization.
The work in this paper does not address the continuous-valued aspects of object tracking. Those are very well understood, and information representations have been successfully applied [5, 10].
Information representations are popular in the field of graphical networks. Our approach can be viewed as a learning algorithm for a Markov network [7] of a special topology, where any track and any object are connected by an edge. Such a network is shown in
Fig. 1e. The filter update equations manipulate the strength of the edges based on data.
2 Problem Setup and Bayes Filter Solution
We begin with a formal definition of the data association problem and derive the obvious
but inefficient Bayes filter solution. Throughout this paper, we make the closed world
assumption, that is, there are always the same N known objects in the world.
2.1 Data Association
We assume that we are given a tracking algorithm that maintains N internal tracks of the
moving objects. Due to insufficient information, this assumed tracking algorithm does not
always know the exact mapping of identities to internal tracks. Hence, the same internal
track may correspond to different identities at different times.
The data association problem is the problem of assigning these N tracks to N objects.
Each data association hypothesis is characterized by a permutation matrix of the type shown
in Fig. 1b. The columns of this matrix correspond to the internal tracks, and the rows to
the objects. We will denote the data association matrix by A (not to be confused with the
information matrix Ω). In our closed world, A is always a permutation matrix; hence all
elements are 0 or 1. There are exponentially many permutation matrices, which is a reason
why data association is considered a hard problem.
2.2 Identity Measurement
The correct data association matrix A is unobservable. Instead, the sensors produce local
information about the relation of individual tracks to individual objects. We will denote
sensor measurements by zj , where j is the index of the corresponding track. Each zj =
{zij} specifies a local probability distribution in the corresponding object space:

p(xi = yj | zj) = zij ,   with   Σ_i zij = 1 .       (1)
Here xi is the i-th object in the world, and yj is the j-th track.
The measurement in our introductory example (see Fig. 1c) was of a special form, in
that it elevated one specific correspondence over the others. This occurs when zij = π for some π ≈ 1, and zkj = (1 − π)/(N − 1) for all k ≠ i. Such a measurement arises when the tracker receives evidence that a specific track yj corresponds with high likelihood to a specific object xi. Specifically, the measurement likelihood of this correspondence is π, and the error probability is 1 − π.
2.3 State Transitions
As time passes by, our tracker may confuse tracks, which is a loss of information with
respect to the data association. The tracker confusing two objects amounts to a random flip
of two columns in the data association matrix A.
The model adopted in this paper generalizes this example to arbitrary distributions over permutations of the columns in A. Let {B1, . . . , BM} be a set of permutation matrices, and {λ1, . . . , λM} with Σ_m λm = 1 be a set of associated probabilities. The "true" permutation matrix undergoes a random transition from A to A Bm with probability λm:

A  →  A Bm   with probability λm       (2)
The sets {B1, . . . , BM} and {λ1, . . . , λM} are given to us by the tracker. For the example in Fig. 1d, in which tracks 2 and 3 merge, the following two permutation matrices will implement such a merge:

    B1 = [ 1 0 0 0
           0 1 0 0
           0 0 1 0
           0 0 0 1 ] ,   λ1 = 0.5

    B2 = [ 1 0 0 0
           0 0 1 0
           0 1 0 0
           0 0 0 1 ] ,   λ2 = 0.5       (3)

The first such matrix leaves the association unchanged, whereas the second swaps columns 2 and 3. Since λ1 = λ2 = 0.5, such a swap happens exactly with probability 0.5.
2.4 Inefficient Bayesian Solution
For small N , the data association problem now has an obvious Bayes filter solution. Specifically, let A be the space of all permutation matrices. The Bayesian filter solves the identity
tracking problem by maintaining a probabilistic belief over the space of all permutation
matrices A ∈ A. For each A, it maintains a posterior probability denoted p(A). This probability is updated in two different ways, reminiscent of the measurement and state transition
updates in DBNs and EKFs.
The measurement step updates the belief in response to a measurement zj . This update
is an application of Bayes rule:

p(A) ← (1/L) p(A) Σ_i aij zij ,       (4)
with   L = Σ_{Ā} p(Ā) Σ_i āij zij .       (5)
Here aij denotes the ij-th element of the matrix A. Because A is a permutation matrix,
only one element in the sum over i is non-zero (hence there is not really a summation here).
The state transition updates the belief in accordance with the permutation matrices Bm and associated probabilities λm (see Eq. 2):

p(A) ← Σ_m λm p(A Bm^T)       (6)
We use here that the inverse of a permutation matrix is its transpose.
This Bayesian filter is an exact solution to our identity tracking problem. Its problem is
complexity: there are N ! permutation matrices A, and we have to compute probabilities for
all of them. Thus, the exact filter is only applicable to problems with small N . Even if we
want to keep track of K ≪ N! likely permutations (as attempted by filters like the multi-hypothesis EKF or the particle filter), the required number of tracks K will generally have
to scale exponentially with N (albeit at a slower rate). This exponential scaling renders the
Bayesian filter ultimately inapplicable to the identity tracking problem with large N .
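For concreteness, a minimal sketch of this exact filter for tiny N (our own illustration, not code from the paper; it enumerates all N! permutations, which is exactly why the filter is intractable for large N):

```python
import itertools
import math

def measurement_update(p, j, z_j):
    """Eq. (4): condition on a measurement z_j for track j.
    p maps each permutation sigma (tuple; sigma[j] = object of track j) to its probability."""
    new = {s: q * z_j[s[j]] for s, q in p.items()}
    norm = sum(new.values())
    return {s: q / norm for s, q in new.items()}

def transition_update(p, betas, lam):
    """Eq. (6): mix over column permutations. beta is a tuple encoding B_m:
    the new assignment is sigma'(j) = sigma(beta[j]) with probability lam_m."""
    new = dict.fromkeys(p, 0.0)
    for beta, w in zip(betas, lam):
        for s, q in p.items():
            s_new = tuple(s[beta[j]] for j in range(len(s)))
            new[s_new] += w * q
    return new

# The state space is all N! permutations, hence the exponential cost.
N = 3
belief = {s: 1.0 / math.factorial(N) for s in itertools.permutations(range(N))}
belief = measurement_update(belief, j=0, z_j=[0.8, 0.1, 0.1])  # track 0 looks like object 0
belief = transition_update(belief, betas=[(0, 1, 2), (0, 2, 1)], lam=[0.5, 0.5])
```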
3 The Information-Form Solution
Our data association filter represents the posterior in condensed form, using an N × N information matrix. As a result, it requires linear update time and quadratic memory, instead
of the exponential time and memory requirements of the Bayes filter.
However, we give two caveats regarding our method: it is approximate, and it does not
maintain probabilities. The approximation is the result of a Jensen approximation, which
we will show is empirically accurate. The calculation of probabilities from an information
matrix requires inference, and we will provide several options for performing this inference.
3.1 The Information Matrix
The information matrix, denoted Ω, is a matrix of size N × N whose elements are non-negative. Ω induces a probability distribution over the space of all data association matrices A, through the following definition:

p(A) = (1/Z) exp tr A^T Ω ,   with   Z = Σ_A exp tr A^T Ω       (7)
Here tr is the trace of a matrix, and Z is the partition function.
Computing the posterior probability p(A) from Ω is hard, due to the difficulty of computing the partition function Z. However, as we shall see, maintaining Ω is surprisingly
easy, and it is also computationally efficient.
3.2 Measurement Update in Information Form
In information form, the measurement update is a local addition of the form:

Ω ← Ω + Δ(j) ,       (8)

where Δ(j) is zero everywhere except in its j-th column, which holds the entries (log z1j, . . . , log zNj). This follows directly from Eq. 4. The complexity of this update is O(N).
Of particular interest is the case where one specific association was affirmed with probability zij = π, while all others were true with the error probability zkj = (1 − π)/(N − 1). Then the update is of the form

Ω ← Ω + Δ(i,j) ,   with   c = log (1 − π)/(N − 1) ,       (9)

where Δ(i,j) is zero everywhere except in its j-th column, which holds c in every row except row i, where it holds log π.
However, since Ω is a non-normalized matrix (it is normalized via the partition function Z in Eq. 7), we can modify Ω as long as exp tr A^T Ω is changed by the same factor for any A. In particular, we can subtract c from an entire column in Ω; this will affect the result of exp tr A^T Ω by a constant factor exp(−c), which is independent of A and hence will be subsumed by the normalizer Z. This allows us to perform a more efficient update

Ωij ← Ωij + log π − log (1 − π)/(N − 1) ,       (10)

where Ωij is the ij-th element of Ω. This update is indeed of the form shown in Fig. 1c. It
requires O(1) time, is entirely local, and is an exact realization of Bayes rule in information
form.
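A minimal sketch of the measurement updates (8) and (10) (our own illustration, not code from the paper):

```python
import numpy as np

def info_update_general(Omega, j, z_j):
    """Eq. (8): add log z_j elementwise to column j of the information matrix."""
    Omega = Omega.copy()
    Omega[:, j] += np.log(z_j)
    return Omega

def info_update_peaked(Omega, i, j, pi, N):
    """Eq. (10): O(1) update after evidence that track j is object i with likelihood pi."""
    Omega = Omega.copy()
    Omega[i, j] += np.log(pi) - np.log((1.0 - pi) / (N - 1))
    return Omega

# Fig. 1c adds exactly 1 to the element (object 4, track 2); with N = 4 this corresponds
# to pi = e / (e + 3), since log(pi) - log((1 - pi)/3) = 1 for that value.
Omega = np.array([[2., 12, 4, 4], [1, 2, 11, 0], [10, 4, 4, 15], [5, 2, 1, 2]])
Omega = info_update_peaked(Omega, i=3, j=1, pi=np.e / (np.e + 3), N=4)
print(Omega[3, 1])   # 3.0
```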
3.3 State Transition Update in Information Form
The state transition update is also simple, but it is approximate. We show that using a Jensen bound, we obtain the following update for the information matrix:

Ω ← log Σ_m λm (exp Ω) Bm       (11)
Here the expression "exp Ω" denotes a component-wise exponentiation of the matrix Ω; the result is also a matrix. This update implements a "dual" of a geometric mean; here the exponentiation is applied to the individual elements of this mean, and the logarithm is applied to the result. It is important to notice that this update only affects elements in Ω that might be affected by a permutation Bm; all others remain the same.
A numerical example of this update was given in Fig. 1d, assuming the permutation
matrices in Eq. 3. The values there are the result of applying this update formula. For
example, for the first row we get log ½ (exp 12 + exp 4) = 11.3072.
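A minimal sketch of the transition update as reconstructed in Eq. 11, checked against the Fig. 1d numbers (our own illustration, not code from the paper):

```python
import numpy as np

def info_transition_update(Omega, B_list, lam):
    """Eq. (11): Omega <- log sum_m lam_m (exp Omega) B_m (elementwise exp/log).
    Only columns touched by some B_m actually change."""
    E = np.exp(Omega)
    mixed = sum(w * (E @ B) for B, w in zip(B_list, lam))
    return np.log(mixed)

# Reproducing Fig. 1d: tracks 2 and 3 merge with probability 0.5 each.
Omega = np.array([[2., 12, 4, 4], [1, 2, 11, 0], [10, 4, 4, 15], [5, 3, 1, 2]])
B1 = np.eye(4)
B2 = np.eye(4)[:, [0, 2, 1, 3]]          # swaps columns 2 and 3
print(info_transition_update(Omega, [B1, B2], [0.5, 0.5]))
# Row 1, columns 2 and 3 become log(0.5 * (exp(12) + exp(4))) = 11.3072.
```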
The derivation of this update formula is straightforward. We begin with Eq. 6, written in logarithmic form. The transformations rely heavily on the fact that A and Bm are permutation matrices. We use the symbol "tr*" for a multiplicative version of the matrix trace, in which all elements on the diagonal are multiplied.

log p(A) ← log Σ_m λm p(A Bm^T)
         = const. + log Σ_m λm exp tr(A^T Ω Bm)
         = const. + log Σ_m λm tr*(A^T (exp Ω) Bm)
         ≈ const. + log tr*(A^T Σ_m λm (exp Ω) Bm)
         = const. + tr( A^T log [ Σ_m λm (exp Ω) Bm ] )       (12)
The result is of the form of (the logarithm of) Eq. 7. The expression in brackets is equivalent to the right-hand side of the update Eq. 11. A benefit of this update rule is that it only affects columns in Ω that are affected by a permutation Bm; all other columns are unchanged.
We note that the approximation in this derivation is the result of applying a Jensen bound. As a result, we gain a compact closed-form solution to the update problem, but the state transition step may sacrifice information in doing so (as indicated by the "≈" sign).
In our experimental results section, however, we find that this approximation is extremely
accurate in practice.
4 Computing the Data Association
The previous section formally derived our update rules, which are simple and local. We
now address the problem of recovering actual data association hypotheses from the information matrix, along with the associated probabilities.
We consider three cases: the computation of the most likely data association matrix as
illustrated in Fig. 1b; the computation of a relative probability of the form p(A)/p(A′); and
the computation of an absolute probability or expectation.
To recover argmaxA p(A), we need only solve a linear program.
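Since the LP relaxation over doubly stochastic matrices attains its optimum at a permutation matrix (Birkhoff-von Neumann), the maximization of tr A^T Ω can equivalently be solved as a linear assignment problem. A minimal sketch using SciPy's assignment solver (our choice of solver, not prescribed by the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def most_likely_association(Omega):
    """argmax_A tr(A^T Omega) over permutation matrices, as a linear assignment."""
    rows, cols = linear_sum_assignment(Omega, maximize=True)
    A = np.zeros_like(Omega)
    A[rows, cols] = 1.0
    return A

Omega = np.array([[2., 12, 4, 4], [1, 2, 11, 0], [10, 4, 4, 15], [5, 2, 1, 2]])
A_hat = most_likely_association(Omega)
print(np.trace(A_hat.T @ Omega))   # 43.0, matching the Fig. 1 example
```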
Relative probabilities are also easy to recover. Consider, for example, the quotient of the probability p(A)/p(A′) for two data association matrices A and A′. When calculating this quotient from Eq. 7, the normalizer Z cancels out:

p(A) / p(A′) = exp tr (A − A′)^T Ω       (13)
Absolute probabilities and expectations are generally the most difficult to compute.
This is because of the partition function Z in Eq. 7, whose exact calculation requires considering N ! permutation matrices.
Our approximate method for recovering probabilities/expectations is based on the
Metropolis algorithm. Specifically, consider the expectation of a function f:

E[f(A)] = Σ_A f(A) p(A)       (14)
Our method approximates this expression through a finite sample of matrices A[1] , A[2] , . . .,
using Metropolis and the proposal distribution defined in Eq. 13. This proposal generates
excellent results for simple functions f (e.g., the marginal of a single identity). For more
(a) camera
(b) array of 16 ceiling-mounted cameras
(c) camera images
(d) 2 of the tracks
Figure 2: The camera array, part of the common area in the Stanford AI Lab. Panel (d) compares
our estimate with ground truth for two of the tracks. The data association is essentially correct at all
times.
(a) Comparison K-hypothesis vs.
information-theoretic tracker
(b) Comparison using a DARPA challenge
data set produced by Northrop Grumman
[Curve labels: "K-hypotheses" and "our approach".]
Figure 3: Results for our information-form filter and the common multi-hypothesis approach
for (a) synthetic data and (b) a DARPA challenge data set. The comparison (b) involves additional
algorithms, including one published in [8].
complex functions f , we refer the reader to improved proposal distributions that have been
found to be highly efficient in related problems [6, 2].
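A minimal sketch of such a Metropolis estimator (our own illustration, not code from the paper; the two-column-swap proposal is one simple symmetric choice):

```python
import numpy as np

def metropolis_expectation(Omega, f, n_steps=10000, rng=None):
    """Approximate E[f(A)] under p(A) ∝ exp tr(A^T Omega) (Eq. 14) with Metropolis.
    Proposal: swap the object assignments of two random tracks; the acceptance
    ratio p(A')/p(A) follows Eq. (13) and needs only four entries of Omega."""
    rng = np.random.default_rng() if rng is None else rng
    N = Omega.shape[0]
    sigma = rng.permutation(N)          # sigma[j] = object assigned to track j
    total = 0.0
    for _ in range(n_steps):
        j, k = rng.choice(N, size=2, replace=False)
        delta = (Omega[sigma[k], j] + Omega[sigma[j], k]
                 - Omega[sigma[j], j] - Omega[sigma[k], k])
        if np.log(rng.random()) < delta:      # accept with prob min(1, exp(delta))
            sigma[j], sigma[k] = sigma[k], sigma[j]
        total += f(sigma)
    return total / n_steps

# Example: marginal probability that track 0 is object 3
Omega = np.array([[2., 12, 4, 4], [1, 2, 11, 0], [10, 4, 4, 15], [5, 2, 1, 2]])
print(metropolis_expectation(Omega, lambda s: float(s[0] == 3)))
```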
5 Experimental Results
To evaluate this algorithm, we deployed a network of ceiling-mounted cameras in our lab,
shown in Fig. 2. We used 16 cameras to track individuals walking through the lab. The
tracker uses background subtraction to find blobs and uses a color histogram to classify
these blobs. Only when two or more people come very close to each other might the
tracker lose track of individual people. We find that for N = 5 our method tracks people
nearly perfectly, but so does the full-blown Bayesian solution, as well as the K-best multi-hypothesis method that is popular in the tracking literature.
To investigate scaling to larger N, we compared our approach on two data sets: a synthetic one with up to N = 1,600 objects, and a data set from a sensor network simulation
provided to us by Northrop Grumman through an ongoing DARPA program. The latter
set is thought to be realistic. It was chosen because it involves a large number (N = 200)
of moving objects, whose motion patterns come from a behavioral model. In all cases,
we measured the number of objects mislabeled in the maximum likelihood hypothesis (as
found by solving the LP). All results are averaged over 50 runs.
The comparison in Fig. 3a shows that our approach outperforms the traditional K-best
hypothesis approach (with K = N ) by a large margin. Furthermore, our approach seems
to be unaffected by N , the number of entities in the environment, whereas the traditional
approach deteriorates. This comes as no surprise, since the traditional approach requires
increasing numbers of samples to cover the space of all data associations. The results in
Fig. 3b compare (from left to right) the most likely hypothesis, the most recent sensor
measurement, the K-best approach with K = 200, an approach proposed in [8], and our
approach. Notice that this plot is in log-form.
No comparisons were attempted with offline techniques, such as the ones in [4, 6],
because the data sets used here are quite large and our interest is online filtering.
6 Conclusion
We have provided an information form algorithm for the data association problem in object
tracking. The key idea of this approach is to maintain a cumulative matrix of information
associating computer-internal tracks with physical objects. Updating this matrix is easy;
furthermore, efficient methods were proposed for extracting concrete data association hypotheses from this representation. Empirical work using physical networks of camera arrays illustrated that our approach outperforms alternative paradigms that are commonly
used throughout all of science.
Despite these advances, the work possesses a number of limitations. Specifically, our
closed world assumption is problematic, although we believe the extension to open worlds
is relatively straightforward. Also missing is a tight integration of our discrete formulation into continuous-valued traditional tracking algorithms such as EKFs. Such extensions
warrant further research.
We believe the key innovation here is best understood from a graphical model perspective. Sampling K good data associations cannot exploit conditional independence in the
data association posterior, hence will always require that K is an exponential function of
N. The information form and the equivalent graphical network in Fig. 1e exploit conditional independences. This subtle difference makes it possible to get away with O(N²) memory and O(N) computation without a loss of accuracy when N increases, as shown in Fig. 3a. The information form discussed here, and the associated graphical networks,
promise to overcome a key brittleness associated with the current state-of-the-art in online
data association.
Acknowledgements
We gratefully thank Jaewon Shin and Leo Guibas for helpful discussions.
This research was sponsored by the Defense Advanced Research Projects Agency
(DARPA) under the ACIP program and grant number NBCH104009.
References
[1] Y. Bar-Shalom and X.-R. Li. Estimation and Tracking: Principles, Techniques, and Software.
YBS, Danvers, MA, 1998.
[2] F. Dellaert, S.M. Seitz, C. Thorpe, and S. Thrun. EM, MCMC, and chain flipping for structure
from motion with unknown correspondence. Machine Learning, 50(1-2):45–71, 2003.
[3] A. Doucet, J.F.G. de Freitas, and N.J. Gordon, editors. Sequential Monte Carlo Methods in
Practice. Springer, 2001.
[4] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting
with applications to image analysis and automated cartography. Communications of the ACM,
24:381–395, 1981.
[5] P. Maybeck. Stochastic Models, Estimation, and Control, Volume 1. Academic Press, 1979.
[6] H. Pasula, S. Russell, M. Ostland, and Y. Ritov. Tracking many objects with many sensors.
IJCAI-99.
[7] J. Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan
Kaufmann, 1988.
[8] J. Shin, N. Lee, S. Thrun, and L. Guibas. Lazy inference on object identities in wireless sensor
networks. IPSN-05.
[9] D.B. Reid. An algorithm for tracking multiple targets. IEEE Transactions on Aerospace and
Electronic Systems, AC-24:843–854, 1979.
[10] S. Thrun, Y. Liu, D. Koller, A.Y. Ng, Z. Ghahramani, and H. Durrant-Whyte. Simultaneous
localization and mapping with sparse extended information filters. IJRR, 23(7/8), 2004.
[11] D. Fox, J. Hightower, L. Liao, D. Schulz, and G. Borriello. Bayesian Filtering for Location
Estimation. IEEE Pervasive Computing, 2003.
1,943 | 2,765 | Representing Part-Whole Relationships in
Recurrent Neural Networks
Viren Jain2 , Valentin Zhigulin1,2 , and H. Sebastian Seung1,2
1
Howard Hughes Medical Institute and
2
Brain & Cog. Sci. Dept., MIT
[email protected], [email protected], [email protected]
Abstract
There is little consensus about the computational function of top-down
synaptic connections in the visual system. Here we explore the hypothesis that top-down connections, like bottom-up connections, reflect part-whole relationships. We analyze a recurrent network with bidirectional
synaptic interactions between a layer of neurons representing parts and a
layer of neurons representing wholes. Within each layer, there is lateral
inhibition. When the network detects a whole, it can rigorously enforce
part-whole relationships by ignoring parts that do not belong. The network can complete the whole by filling in missing parts. The network
can refuse to recognize a whole, if the activated parts do not conform to
a stored part-whole relationship. Parameter regimes in which these behaviors happen are identified using the theory of permitted and forbidden
sets [3, 4]. The network behaviors are illustrated by recreating Rumelhart
and McClelland's "interactive activation" model [7].
In neural network models of visual object recognition [2, 6, 8], patterns of synaptic connectivity often reflect part-whole relationships between the features that are represented
by neurons. For example, the connections of Figure 1 reflect the fact that feature B both
contains simpler features A1, A2, and A3, and is contained in more complex features C1,
C2, and C3. Such connectivity allows neurons to follow the rule that existence of the part
is evidence for existence of the whole. By combining synaptic input from multiple sources
of evidence for a feature, a neuron can "decide" whether that feature is present.¹
The synapses shown in Figure 1 are purely bottom-up, directed from simple to complex
features. However, there are also top-down connections in the visual system, and there
is little consensus about their function. One possibility is that top-down connections also
reflect part-whole relationships. They allow feature detectors to make decisions using the
rule that existence of the whole is evidence for existence of its parts.
In this paper, we analyze the dynamics of a recurrent network in which part-whole relationships are stored as bidirectional synaptic interactions, rather than the unidirectional
interactions of Figure 1. The network has a number of interesting computational capabilities. When the network detects a whole, it can rigorously enforce part-whole relationships
¹ Synaptic connectivity may reflect other relationships besides part-whole. For example, invariances can be implemented by connecting detectors of several instances of the same feature to the same target, which is consequently an invariant detector of the feature.
[Figure 1 diagram: feature B with connections from parts A1, A2, A3 below and to wholes C1, C2, C3 above.]
Figure 1: The synaptic connections (arrows) of neuron B represent part-whole relationships. Feature B both contains simpler features and is contained in more complex features. The synaptic interactions are drawn one-way, as in most models of visual object recognition. Existence of the part is regarded as evidence for existence of the whole. This paper makes the interactions bidirectional, allowing the existence of the whole to be evidence for the existence of its parts.
by ignoring parts that do not belong. The network can complete the whole by filling in
missing parts. The network can refuse to recognize a whole, if the activated parts do not
conform to a stored part-whole relationship. Parameter regimes in which these behaviors
happen are identified using the recently developed theory of permitted and forbidden sets
[3, 4].
Our model is closely related to the interactive activation model of word recognition, which
was proposed by McClelland and Rumelhart to explain the word superiority effect studied
by visual psychologists [7]. Here our concern is not to model a psychological effect, but to
characterize mathematically how computations involving part-whole relationships can be
carried out by a recurrent network.
1
Network model
Suppose that we are given a set of part-whole relationships specified by
$$\pi_i^a = \begin{cases} 1, & \text{if part } i \text{ is contained in whole } a \\ 0, & \text{otherwise} \end{cases}$$
We assume that every whole contains at least one part, and every part is contained in at
least one whole.
The stimulus drives a layer of neurons that detect parts. These neurons also interact with
a layer of neurons that detect wholes. We will refer to part-detectors as "P-neurons" and whole-detectors as "W-neurons."
The part-whole relationships are directly stored in the synaptic connections between P and
W neurons. If $\pi_i^a = 1$, the ith neuron in the P layer and the ath neuron in the W layer have an excitatory interaction of strength $\alpha$. If $\pi_i^a = 0$, the neurons have an inhibitory interaction of strength $\beta$. Furthermore, the P-neurons inhibit each other with strength $\delta$, and the W-neurons inhibit each other with strength $\gamma$. All of these interactions are symmetric, and all activation functions are the rectification nonlinearity $[z]_+ = \max\{z, 0\}$.
Then the dynamics of the network takes the form
$$\dot{W}_a + W_a = \Big[\alpha \sum_i P_i \pi_i^a - \beta \sum_i (1 - \pi_i^a) P_i - \gamma \sum_{b \neq a} W_b\Big]_+ \qquad (1)$$
$$\dot{P}_i + P_i = \Big[\alpha \sum_a W_a \pi_i^a - \beta \sum_a (1 - \pi_i^a) W_a - \delta \sum_{j \neq i} P_j + B_i\Big]_+ \qquad (2)$$
where Bi is the input to the P layer from the stimulus. Figure 2 shows an example of a
network with two wholes. Each whole contains two parts. One of the parts is contained in
both wholes.
[Figure 2 diagram: W layer with neurons $W_a$, $W_b$ coupled by mutual inhibition $-\gamma$; P layer with neurons $P_1$, $P_2$, $P_3$ coupled by mutual inhibition $-\delta$ and receiving inputs $B_1$, $B_2$, $B_3$; excitatory connections of strength $\alpha$ link each W-neuron to its parts.]
Figure 2: Model in example configuration: $\pi = \{(1, 1, 0), (0, 1, 1)\}$.
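As a concrete illustration, the dynamics (1)-(2) can be integrated directly. The following minimal sketch (Python) uses simple Euler integration for the two-whole network of Figure 2; the parameter values and the input B are illustrative assumptions chosen to satisfy $\gamma > 1$ and the enforcement condition (3) of the next section, not values taken from the paper.

```python
import numpy as np

# Euler simulation of Eqs. (1)-(2) for pi = {(1,1,0), (0,1,1)} of Figure 2.
pi = np.array([[1, 1, 0],
               [0, 1, 1]], dtype=float)     # pi[a, i] = 1 if part i is in whole a
alpha, beta, gamma, delta = 0.8, 0.5, 1.5, 0.5   # assumed illustrative values
B = np.array([1.0, 1.0, 0.3])                # bottom-up input to the P layer
W = np.zeros(2)                              # whole detectors
P = np.zeros(3)                              # part detectors
dt = 0.05
relu = lambda z: np.maximum(z, 0.0)
for _ in range(4000):
    dW = -W + relu(alpha * pi @ P - beta * (1 - pi) @ P
                   - gamma * (W.sum() - W))          # lateral term: sum over b != a
    dP = -P + relu(alpha * pi.T @ W - beta * (1 - pi).T @ W
                   - delta * (P.sum() - P) + B)      # lateral term: sum over j != i
    W += dt * dW
    P += dt * dP
print(np.round(W, 3), np.round(P, 3))   # expect a single active W-neuron (whole 1)
```

With input favoring parts 1 and 2, the first W-neuron should win the competition and, by enforcement, suppress $P_3$ despite its weak bottom-up input.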
When a stimulus is presented, it activates some of the P-neurons, which activate some of the W-neurons. The network eventually converges to a stable steady state. We will assume that $\gamma > 1$. In the Appendix, we prove that this leads to unconditional winner-take-all behavior in the W layer. In other words, no more than one W-neuron can be active at a stable steady state.
If a single W-neuron is active, then a whole has been detected. Potentially there are also
many P-neurons active, indicating detection of parts. This representation may have different properties, depending on the choice of parameters $\alpha$, $\beta$, and $\delta$. As discussed below, these include rigorous enforcement of part-whole relationships, completion of wholes by "filling in" missing parts, and non-recognition of parts that do not conform to a whole.
2
Enforcement of part-whole relationships
Suppose that a single W-neuron is active at a stable steady state, so that a whole has been
detected. Part-whole relationships are said to be enforced if the network always ignores
parts that are not contained in the detected whole, despite potentially strong bottom-up
evidence for them. It can be shown that enforcement follows from the inequality
$$\alpha^2 + \beta^2 + \delta^2 + 2\alpha\beta\delta > 1, \qquad (3)$$
which guarantees that neuron $i$ in the P layer is inactive if neuron $a$ in the W layer is active and $\pi_i^a = 0$. When part-whole relations are enforced, prior knowledge about legal
combinations of parts strictly constrains what may be perceived. This result is proven in
the Appendix, and only an intuitive explanation is given here.
Enforcement is easiest to understand when there is interlayer inhibition ($\beta > 0$). In this case, the active W-neuron directly inhibits the forbidden P-neurons. The case of $\beta = 0$ is
more subtle. Then enforcement is mediated by lateral inhibition in the P layer. Excitatory
feedback from the W-neuron has the effect of counteracting the lateral inhibition between
the P-neurons that belong to the whole. As a result, these P-neurons become strongly
activated enough to inhibit the rest of the P layer.
3
Completion of wholes by filling in missing parts
If a W-neuron is active, it excites the P-neurons that belong to the whole. As a result, even
if one of these P-neurons receives no bottom-up input ($B_i = 0$), it is still active. We call this phenomenon "completion," and it is guaranteed to happen when
$$\alpha > \sqrt{\delta}. \qquad (4)$$
The network may thus "imagine" parts that are consistent with the recognized whole, but are not actually present in the stimulus. As with enforcement, this condition depends on top-down connections.
In the special case $\alpha = \sqrt{\delta}$, the interlayer excitation between a W-neuron and its P-neurons exactly cancels out the lateral inhibition between the P-neurons at a steady state. So the recurrent connections effectively vanish, letting the activity of the P-neurons be determined by their feedforward inputs. When the interlayer excitation is stronger than this, the inequality (4) holds, and completion occurs.
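The cancellation claim can be made concrete with the steady-state algebra that also yields Eq. (7) in the Appendix. With W-neuron $a$ active and part $i$ belonging to it:

```latex
\begin{aligned}
P_i &= \alpha W_a - \delta \sum_{j \neq i} P_j + B_i,
\qquad W_a = \alpha \sum_j \pi_j^a P_j = \alpha P_{tot},\\
(1-\delta)\, P_i &= B_i - (\delta - \alpha^2)\, P_{tot},
\end{aligned}
```

so the effective lateral inhibition among the active parts is $\delta - \alpha^2$: it vanishes at $\alpha = \sqrt{\delta}$ and becomes net excitation when $\alpha > \sqrt{\delta}$, which forces $P_i > 0$ even for $B_i = 0$.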
4
Non-recognition of a whole
If there is no interlayer inhibition ($\beta = 0$), then a single W-neuron is always active, assuming that there is some activity in the P layer. To see this, suppose for the sake of contradiction that all the W-neurons are inactive. Then they receive no inhibition to counteract the
excitation from the P layer. This means some of them must be active, which contradicts our
assumption. This means that the network always recognizes a whole, even if the stimulus
is very different from any part-whole combination that is stored in the network.
However, if interlayer inhibition is sufficiently strong (large $\beta$), the network may refuse to recognize a whole. Neurons in the P layer are activated, but there is no activity in the W layer. Formal conditions on $\beta$ can be derived, but are not given here because of space
limitations.
In case of non-recognition, constraints on the P-layer are not enforced. It is possible for the
network to detect a configuration of parts that is not consistent with any stored whole.
5
Example: Interactive Activation model
To illustrate the computational capabilities of our network, we use it to recreate the interactive activation (IA) model of McClelland and Rumelhart. Figure 3 shows numerical
simulations of a network containing three layers of neurons representing strokes, letters,
and words, respectively. There are 16 possible strokes in each of four letter positions. For
each stroke, there are two neurons, one signaling the presence of the stroke and the other
signaling its absence. Letter neurons represent each letter of the alphabet in each of four
positions. Word neurons represent each of 1200 common four letter words.
The letter and word layers correspond to the P and W layers that were introduced previously. There are bidirectional interactions between the letter and word layers, and lateral
inhibition within the layers. The letter neurons also receive input from the stroke neurons,
but this interaction is unidirectional.
Our network differs in two ways from the original IA model. First, all interactions involving
letter and word neurons are symmetric. In the original model, the interactions between the
letter and word layers were asymmetric. In particular, inhibitory connections only ran from
letter neurons to word neurons, and not vice versa. Second, the only nonlinearity in our
model is rectification. These two aspects allow us to apply the full machinery of the theory
of permitted and forbidden sets.
Figure 3 shows the result of presenting the stimulus "MO M" for four different settings of parameters. In each of the four cases, the word layer of the network converges to the same result, detecting the word "MOON", which is the closest stored word to the stimulus.
However, the activity in the letter layer is different in the four cases.
[Figure 3 panels: stimulus "MO M" at top; four panels (completion vs. noncompletion, enforcement vs. non-enforcement), each showing P layer activity, a reconstruction, and W layer activity.]
Figure 3: Simulation of 4 different parameter regimes in a letter-word recognition network. Within each panel, the middle column presents a feature-layer reconstruction based on the letter activity shown in the left column. W layer activity is shown in the right column. The top row shows the network state after 10 iterations of the dynamics. The bottom row shows the steady state.
In the left column, the parameters obey the inequality (3), so that part-whole relationships are enforced. The activity of the letter layer is visualized by activating the strokes corresponding to each active letter neuron. The activated letters are part of the word "MOON". In the top left, the inequality (4) is satisfied, so that the missing "O" in the stimulus is filled in. In the bottom left, completion does not occur.
In the simulations of the right column, parameters are such that part-whole relationships are not enforced. Consequently, the word layer is much more active. Bottom-up input provides evidence for several other letters, which is not suppressed. In the top right, the inequality (4) is satisfied, so that the missing "O" in the stimulus is filled in. In the bottom right, the "O" neuron is not activated in the third position, so there is no completion. However, some letter neurons for the third position are activated, due to the input from neurons that indicate the absence of strokes.
[Figure 4 panels: stimulus; a non-recognition event; multistability.]
Figure 4: Simulation of a non-recognition event and example of multistability.
Figure 4 shows simulations for large $\beta$, deep in the enforcement regime where non-recognition is a possibility. From one initial condition, the network converges to a state in which no W-neurons are active, a non-recognition. From another initial condition, the network detects the word "NORM". Deep in the enforcement regime, the top-down feedback can be so strong that the network has multiple stable states, many of which bear little resemblance to the stimulus at all. This is a problematic aspect of this network. It can be prevented by setting parameters at the edge of the enforcement regime.
6
Discussion
We have analyzed a recurrent network that performs computations involving part-whole
relationships. The network can fill in missing parts and suppress parts that do not belong.
These two computations are distinct and can be dissociated from each other, as shown in
Figure 3.
While these two computations can also be performed by associative memory models, they
are not typically dissociable in these models. For example, in the Hopfield model pattern
completion and noise suppression are both the result of recall of one of a finite number of
stereotyped activity patterns.
We believe that our model is more appropriate for perceptual systems, because its behavior
is piecewise linear, due to its reliance on rectification nonlinearity. Therefore, analog aspects
of computation are able to coexist with the part-whole relationships. Furthermore, in our
model the stimulus is encoded in maintained synaptic input to the network, rather than as
an initial condition of the dynamics.
A
Appendix: Permitted and forbidden sets
Our mathematical results depend on the theory of permitted and forbidden sets [3, 4], which is summarized briefly here. The theory is applicable to neural networks with rectification nonlinearity, of the form $\dot{x}_i + x_i = [b_i + \sum_j W_{ij} x_j]_+$. Neuron $i$ is said to be active when $x_i > 0$. For a network of $N$ neurons, there are $2^N$ possible sets of active neurons. For each
active set, consider the submatrix of Wij corresponding to the synapses between active
neurons. If all eigenvalues of this submatrix have real parts less than or equal to unity, then
the active set is said to be permitted. Otherwise the active set is said to be forbidden. A set
is permitted if and only if there exists an input vector b such that those neurons are active
at a stable steady state. Permitted sets can be regarded as memories stored in the synaptic
connections Wij . If Wij is a symmetric matrix, the nesting property holds: every subset of
a permitted set is permitted, and every superset of a forbidden set is forbidden.
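Since the permitted/forbidden test is just an eigenvalue computation on a submatrix, it is easy to check numerically. A minimal sketch (Python); the matrix M and the condition in the usage example are those of Theorem 1 in section A.2 below, with assumed illustrative parameter values:

```python
import numpy as np

def is_permitted(W, active_set, tol=1e-9):
    """Permitted-set test of [3, 4]: an active set is permitted iff every
    eigenvalue of the corresponding submatrix of the synaptic weight
    matrix W has real part <= 1."""
    idx = np.asarray(sorted(active_set))
    sub = W[np.ix_(idx, idx)]
    return np.max(np.linalg.eigvals(sub).real) <= 1.0 + tol

# Usage: one W-neuron plus its k parts, as in the matrix M of Eq. (5),
# with hypothetical values k = 3, delta = 0.5, alpha = 0.8.
k, delta, alpha = 3, 0.5, 0.8
M = np.zeros((k + 1, k + 1))
M[:k, :k] = -delta * (np.ones((k, k)) - np.eye(k))
M[:k, k] = alpha
M[k, :k] = alpha
print(is_permitted(M, range(k + 1)))        # Theorem 1 predicts True iff
print(alpha**2 < delta + (1 - delta) / k)   # alpha^2 < delta + (1-delta)/k
```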
The present model can be seen as a general method for storing permitted sets in a recurrent
network. This method introduces a neuron for each permitted set, relying on a unary or
"grandmother cell" representation. In contrast, Xie et al. [9] used lateral inhibition in a
single layer of neurons to store permitted sets. By introducing extra neurons, the present
model achieves superior storage capacity, much as unary models of associative memory [1]
surpass distributed models [5].
A.1
Unconditional winner-take-all in the W layer
The synapses between two W-neurons have strengths
$$\begin{pmatrix} 0 & -\gamma \\ -\gamma & 0 \end{pmatrix}$$
The eigenvalues of this matrix are $\pm\gamma$. Therefore two W-neurons constitute a forbidden set if $\gamma > 1$. By the nesting property, it follows that any set of more than two W-neurons is also a forbidden set, and that the W layer has the unconditional winner-take-all property.
A.2
Part-whole combinations as permitted sets
Theorem 1. Suppose that $\delta < 1$. If $\alpha^2 < \delta + (1 - \delta)/k$ then any combination of $k \geq 1$ parts consistent with a whole corresponds to a permitted set.
Proof. Consider $k$ parts belonging to a whole. They are represented by one W-neuron and $k$ P-neurons, with synaptic connections given by the $(k + 1) \times (k + 1)$ matrix
$$M = \begin{pmatrix} -\delta(\mathbf{1}\mathbf{1}^T - I) & \alpha\mathbf{1} \\ \alpha\mathbf{1}^T & 0 \end{pmatrix}, \qquad (5)$$
where $\mathbf{1}$ is the $k$-dimensional vector whose elements are all equal to one. Two eigenvectors of $M$ are of the form $(\mathbf{1}^T\ c)$, and have the same eigenvalues as the $2 \times 2$ matrix
$$\begin{pmatrix} -\delta(k-1) & \alpha \\ \alpha k & 0 \end{pmatrix}$$
This matrix has eigenvalues less than one when $\alpha^2 < \delta + (1 - \delta)/k$ and $\delta(k - 1) + 2 > 0$. The other $k - 1$ eigenvectors are of the form $(d^T, 0)$, where $d^T\mathbf{1} = 0$. These have eigenvalues $\delta$. Therefore all eigenvalues of $M$ are less than one if the condition of the theorem is satisfied.
A.3
Constraints on combining parts
Here, we derive conditions under which the network can enforce the constraint that steady state activity be confined to
parts that constitute a whole.
Theorem 2. Suppose that $\beta > 0$ and $\alpha^2 + \beta^2 + \delta^2 + 2\alpha\beta\delta > 1$. If a W-neuron is active, then only P-neurons corresponding to parts contained in the relevant whole can be active at a stable steady state.
Proof. Consider P-neurons $P_i$, $P_j$, and W-neuron $W_a$. Suppose that $\pi_i^a = 1$ but $\pi_j^a = 0$. As shown in Figure 5, the matrix of connections is given by:
$$W = \begin{pmatrix} 0 & -\delta & \alpha \\ -\delta & 0 & -\beta \\ \alpha & -\beta & 0 \end{pmatrix} \qquad (6)$$
[Figure 5 diagram: $W_a$ linked to $P_i$ by $\alpha$ and to $P_j$ by $-\beta$; $P_i$ and $P_j$ linked by $-\delta$.]
Figure 5: A set of one W-neuron and two P-neurons is forbidden if one part belongs to the whole and the other does not.
This set is permitted if all eigenvalues of $W - I$ have negative real parts. The characteristic equation of $I - W$ is $\lambda^3 + b_1\lambda^2 + b_2\lambda + b_3 = 0$, where $b_1 = 3$, $b_2 = 3 - \alpha^2 - \beta^2 - \delta^2$ and $b_3 = 1 - 2\alpha\beta\delta - \alpha^2 - \beta^2 - \delta^2$. According to the Routh-Hurwitz theorem, all the eigenvalues have negative real parts if and only if $b_1 > 0$, $b_3 > 0$ and $b_1 b_2 > b_3$. Clearly, the first condition is always satisfied. The second condition is more restrictive than the third. It is satisfied only when $\alpha^2 + \beta^2 + \delta^2 + 2\alpha\beta\delta < 1$. Hence, one of the eigenvalues has a positive real part when this condition is broken, i.e., when $\alpha^2 + \beta^2 + \delta^2 + 2\alpha\beta\delta > 1$. By the nesting property, any larger set of P-neurons inconsistent with the W-neuron is also forbidden.
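A quick numerical spot-check of this argument (the parameter triples here are arbitrary assumptions): the largest eigenvalue of the symmetric matrix (6) exceeds one exactly when the condition of the theorem holds.

```python
import numpy as np

# The triple {W_a, P_i, P_j} with pi_i^a = 1, pi_j^a = 0 should be
# forbidden exactly when alpha^2 + beta^2 + delta^2 + 2*alpha*beta*delta > 1.
for alpha, beta, delta in [(0.9, 0.5, 0.3), (0.3, 0.2, 0.1)]:
    W = np.array([[0.0, -delta, alpha],
                  [-delta, 0.0, -beta],
                  [alpha, -beta, 0.0]])
    forbidden_by_eigs = np.max(np.linalg.eigvalsh(W)) > 1.0
    forbidden_by_cond = alpha**2 + beta**2 + delta**2 + 2*alpha*beta*delta > 1.0
    print(alpha, beta, delta, forbidden_by_eigs, forbidden_by_cond)
```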
A.4
Completion of wholes
Theorem 3. If $\alpha > \sqrt{\delta}$ and a single W-neuron $a$ is active at a steady state, then $P_i > 0$ for all $i$ such that $\pi_i^a = 1$.
Proof. Suppose that the detected whole has $k$ parts. At the steady state
$$P_i = \pi_i^a \left[ \frac{B_i - (\delta - \alpha^2)\, P_{tot}}{1 - \delta} \right]_+$$
where
$$P_{tot} = \sum_i P_i = \frac{1}{1 - \delta + (\delta - \alpha^2)k} \sum_{i=1}^{k} B_i \pi_i^a. \qquad (7)$$
Since $\alpha^2 > \delta$, the coefficient $\delta - \alpha^2$ is negative, so $P_i > 0$ even when $B_i = 0$, provided $P_{tot} > 0$.
A.5
Preventing runaway
If feedback loops cause the network activity to diverge, then the preceding analyses are not
relevant. Here we give a sufficient condition guaranteeing that runaway instability does not
happen. It is not a necessary condition. Interestingly, the condition implies the condition
of Theorem 1.
Theorem 4. Suppose that $P$ and $W$ obey the dynamics of Eqs. (1) and (2), and define the objective function
$$E = \frac{1-\gamma}{2}\sum_a W_a^2 + \frac{\gamma}{2}\Big(\sum_a W_a\Big)^2 + \frac{1-\delta}{2}\sum_i P_i^2 + \frac{\delta}{2}\Big(\sum_i P_i\Big)^2 - \sum_i B_i P_i - \alpha\sum_{ia} P_i W_a \pi_i^a + \beta\sum_{ia}(1 - \pi_i^a) P_i W_a. \qquad (8)$$
Then $E$ is a Lyapunov-like function that, given $\delta > \alpha^2 - \frac{1-\alpha^2}{N-1}$, ensures convergence of the dynamics to a stable steady state.
Proof. (sketch) Differentiation of $E$ with respect to time shows that $E$ is nonincreasing in the nonnegative orthant and constant only at steady states of the network dynamics. We must also show that $E$ is radially unbounded, which is true if the quadratic part of $E$ is copositive definite. Note that the last term of $E$ is lower-bounded by zero and the previous term is upper-bounded by $\alpha \sum_{ia} P_i W_a$. We assume $\gamma > 1$. Thus, we can use Cauchy's inequality, $\sum_i P_i^2 \geq (\sum_i P_i)^2/N$, and the fact that $\sum_a W_a^2 \leq (\sum_a W_a)^2$ for $W_a \geq 0$, to derive
$$E \geq \frac{1}{2}\left[\Big(\delta + \frac{1-\delta}{N}\Big)\Big(\sum_i P_i\Big)^2 + \Big(\sum_a W_a\Big)^2 - 2\alpha\Big(\sum_a W_a\Big)\Big(\sum_i P_i\Big)\right] - \sum_i B_i P_i. \qquad (9)$$
If $\delta > \alpha^2 - \frac{1-\alpha^2}{N-1}$, the quadratic form in the inequality is positive definite and $E$ is radially unbounded.
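As a numerical cross-check of the proof sketch, one can evaluate E along a simulated trajectory and confirm that it never increases. A minimal helper (Python; it reuses the array conventions of the earlier simulation sketch, so `pi[a, i]` stands for $\pi_i^a$):

```python
import numpy as np

def energy(W, P, pi, B, alpha, beta, gamma, delta):
    """Objective E of Eq. (8); should be nonincreasing along Eqs. (1)-(2)."""
    return ((1 - gamma) / 2 * np.sum(W**2) + gamma / 2 * W.sum()**2
            + (1 - delta) / 2 * np.sum(P**2) + delta / 2 * P.sum()**2
            - B @ P
            - alpha * W @ (pi @ P)          # -alpha * sum_ia Pi Wa pi_i^a
            + beta * W @ ((1 - pi) @ P))    # +beta * sum over pairs with pi_i^a = 0
```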
References
[1] E. B. Baum, J. Moody, and F. Wilczek. Internal representations for associative memory. Biol. Cybern., 59:217–228, 1988.
[2] K. Fukushima. Neocognitron: a self organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern, 36(4):193–202, 1980.
[3] R.H. Hahnloser, R. Sarpeshkar, M.A. Mahowald, R.J. Douglas, and H.S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789):947–51, Jun 22 2000.
[4] R.H. Hahnloser, H.S. Seung, and J.-J. Slotine. Permitted and forbidden sets in symmetric threshold-linear networks. Neural Computation, 15:621–638, 2003.
[5] J.J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A, 79(8):2554–8, Apr 1982.
[6] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1:541–551, 1989.
[7] J. L. McClelland and D. E. Rumelhart. An interactive activation model of context effects in letter perception: Part I. An account of basic findings. Psychological Review, 88(5):375–407, Sep 1981.
[8] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nat Neurosci, 2(11):1019–25, Nov 1999.
[9] X. Xie, R.H. Hahnloser, and H. S. Seung. Selectively grouping neurons in recurrent networks of lateral inhibition. Neural Computation, 14:2627–2646, 2002.
1,944 | 2,766 | Fixing two weaknesses of the Spectral Method
Kevin J. Lang
Yahoo Research
3333 Empire Ave, Burbank, CA 91504
[email protected]
Abstract
We discuss two intrinsic weaknesses of the spectral graph partitioning
method, both of which have practical consequences. The first is that
spectral embeddings tend to hide the best cuts from the commonly used
hyperplane rounding method. Rather than cleaning up the resulting suboptimal cuts with local search, we recommend the adoption of flow-based
rounding. The second weakness is that for many "power law" graphs, the
spectral method produces cuts that are highly unbalanced, thus decreasing the usefulness of the method for visualization (see figure 4(b)) or
as a basis for divide-and-conquer algorithms. These balance problems,
which occur even though the spectral method's quotient-style objective
function does encourage balance, can be fixed with a stricter balance constraint that turns the spectral mathematical program into an SDP that can
be solved for million-node graphs by a method of Burer and Monteiro.
1
Background
Graph partitioning is the NP-hard problem of finding a small graph cut subject to the constraint that neither side of the resulting partitioning of the nodes is "too small". We will be dealing with several versions: the graph bisection problem, which requires perfect 1/2 : 1/2 balance; the β-balanced cut problem (with β a fraction such as 1/3), which requires at least β : (1 − β) balance; and the quotient cut problem, which requires the small side to be large enough to "pay for" the edges in the cut. The quotient cut metric is c/min(a, b), where c
is the cutsize and a and b are the sizes of the two sides of the cut. All of the well-known
variants of the quotient cut metric (e.g. normalized cut [15]) have similar behavior with
respect to the issues discussed in this paper.
The spectral method for graph partitioning was introduced in 1973 by Fiedler and Donath
& Hoffman [6]. In the mid-1980's Alon & Milman [1] proved that spectral cuts can be at worst quadratically bad; in the mid-1990's Guattery & Miller [10] proved that this analysis is tight by exhibiting a family of n-node graphs whose spectral bisections cut O(n^{2/3}) edges versus the optimal O(n^{1/3}) edges. On the other hand, Spielman & Teng [16] have
proved stronger performance guarantees for the special case of spacelike graphs.
The spectral method can be derived by relaxing a quadratic integer program which encodes the graph bisection problem (see section 3.1). The solution to this relaxation is the "Fiedler vector", or second smallest eigenvector of the graph's discrete Laplacian matrix, whose elements x_i can be interpreted as an embedding of the graph on the line. To obtain a specific cut, one must apply a "rounding method" to this embedding. The hyperplane rounding method chooses one of the n − 1 cuts which separate the nodes whose x_i values lie above and below some split value x̂.
[Figure 1 panels: (A) Graph with nearly balanced 8-cut; (B) Spectral Embedding; (C) Notional Flow-based Embedding.]
Figure 1: The spectral embedding hides the best solution from hyperplane rounding.
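For reference, hyperplane rounding amounts to a linear sweep along the embedding. A minimal sketch (Python; the helper name and the use of a dense adjacency matrix are assumptions for illustration) that scores all n − 1 splits by the quotient cut metric c/min(a, b):

```python
import numpy as np

def best_hyperplane_cut(A, x):
    """Sweep the n-1 hyperplane cuts along embedding x (e.g. the Fiedler
    vector) and return the best quotient cut score c / min(a, b).
    A is a symmetric 0/1 adjacency matrix (dense, for simplicity)."""
    order = np.argsort(x)
    n = len(x)
    left = np.zeros(n, dtype=bool)
    cutsize = 0
    best_score, best_size = np.inf, 0
    for k, v in enumerate(order[:-1]):
        left[v] = True                               # move node v to the left side
        cutsize += A[v].sum() - 2 * A[v, left].sum() # update cut incrementally
        score = cutsize / min(k + 1, n - k - 1)
        if score < best_score:
            best_score, best_size = score, k + 1
    return best_score, best_size
```

Flow-based rounding, introduced in the next section, searches a strictly larger family of cuts from the same node ordering.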
2
Using flow to find cuts that are hidden from hyperplane rounding
Theorists have long known that the spectral method cannot distinguish between deep cuts
and long paths, and that this confusion can cause it to cut a graph in the wrong direction
thereby producing the spectral method's worst-case behavior [10]. In this section we will show by example that even when the spectral method is not fooled into cutting in the wrong direction, the resulting embedding can hide the best cuts from the hyperplane rounding method. This is a possible explanation for the frequently made empirical observation (see e.g. [12]) that hyperplane roundings of spectral embeddings are noisy and therefore benefit from cleanup with a local search method such as Fiduccia-Mattheyses [8].
Consider the graph in figure 1(a), which has a near-bisection cutting 8 edges. For this graph
the spectral method produces the embedding shown in figure 1(b), and recommends that we
make a vertical cut (across the horizontal dimension which is based on the Fiedler vector).
This is correct in a generalized sense, but it is obvious that no hyperplane (or vertical line
in this picture) can possibly extract the optimal 8-edge cut.
Some insight into why spectral embeddings tend to have this problem can be obtained
from the spectral method's electrical interpretation. In this view the graph is represented by a resistor network [7]. Current flowing in this network causes voltage drops across the resistors, thus determining the nodes' voltages and hence their positions. When current
flows through a long series of resistors, it induces a progressive voltage drop. This is what
causes the excessive length of the embeddings of the horizontal girder-like structures which
are blocking all vertical hyperplane cuts in figure 1(b).
If the embedding method were somehow not based on current, but rather on flow, which
does not distinguish between a pipe and a series of pipes, then the long girders could retract
into the two sides of the embedding, as suggested by figure 1(c), and the best cut would
be revealed. Because theoretical flow-like embedding methods such as [14] are currently
not practical, we point out that in cases like figure 1(b), where the spectral method has not
chosen an incorrect direction for the cut, one can use an S-T max flow problem with the
flow running in the recommended direction (horizontally for this embedding) to extract the
good cut even though it is hidden from all hyperplanes.
We currently use two different flow-based rounding methods. A method called MQI looks
for quotient cuts, and is already described in [13]. Another method, that we shall call Midflow, looks for β-balanced cuts. The input to Midflow is a graph and an ordering of its nodes (obtained e.g. from a spectral embedding or from the projection of any embedding onto a line). We divide the graph's nodes into 3 sets F, L, and U. The sets F and L respectively contain the first βn and last βn nodes in the ordering, and U contains the remaining U = n − 2βn nodes, which are "up for grabs". We set up an S-T max flow problem with one node for every graph node plus 2 new nodes for the source and sink. For each graph edge there are two arcs, one in each direction, with unit capacity. Finally, the nodes in F are pinned to the source and the nodes in L are pinned to the sink by infinite capacity arcs. This max-flow problem can be solved by a good implementation of the push-relabel algorithm (such as Goldberg and Cherkassky's hi_pr [4]) in time that empirically is nearly linear with a very good constant factor. Figure 6 shows that solving a MidFlow problem with hi_pr can be 1000 times cheaper than finding a spectral embedding with ARPACK.
[Figure 2 plot: quotient cut score (cutsize / size of small side) vs. number of nodes on the "left" side of the cut (out of 324,800). Hyperplane roundings of the Fiedler vector trace a curve whose best score is 0.00268 (0.00232 after local search), with the neg-pos split and 50-50 balance points marked; Midflow roundings of the Fiedler vector score 0.00138 at beta = 1/3 and 0.00145 at beta = 1/4.]
Figure 2: A typical example (see section 2.1) where flow-based rounding beats hyperplane rounding, even when the hyperplane cuts are improved with Fiduccia-Mattheyses search. Note that for this spacelike graph, the best quotient cuts have reasonably good balance.
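A minimal sketch of the Midflow construction, using networkx's generic max-flow routine in place of hi_pr (far slower, but the graph construction is the same; the node labels 's' and 't' are assumed not to collide with graph nodes):

```python
import networkx as nx

def midflow(G, order, beta=1/3):
    """Given an undirected graph G and a node ordering (e.g. by Fiedler
    value), pin the first beta*n nodes to a source and the last beta*n
    nodes to a sink, and return the resulting minimum s-t cut."""
    n = len(order)
    first, last = order[:int(beta * n)], order[-int(beta * n):]
    D = nx.DiGraph()
    for u, v in G.edges():                 # two unit-capacity arcs per edge
        D.add_edge(u, v, capacity=1)
        D.add_edge(v, u, capacity=1)
    for u in first:                        # edges without a 'capacity' attribute
        D.add_edge('s', u)                 # have infinite capacity in networkx
    for v in last:
        D.add_edge(v, 't')
    cut_value, (S, T) = nx.minimum_cut(D, 's', 't')
    return cut_value, S - {'s'}, T - {'t'}
```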
When the goal is finding good β-balanced cuts, MidFlow rounding is strictly more powerful than hyperplane rounding; from a given node ordering hyperplane rounding chooses the best of U + 1 candidate cuts, while MidFlow rounding chooses the best of 2^U candidates,
including all of those considered by hyperplane rounding. [Similarly, MQI rounding is
strictly more powerful than hyperplane rounding for the task of finding good quotient cuts.]
2.1
A concrete example
The plot in figure 2 shows a number of cuts in a 324,800 node nearly planar graph derived from a 700x464 pixel downward-looking view of some clouds over some mountains.¹ The y-axis of the plot is quotient cut score; smaller values are better. We note in passing that the commonly used split point x̂ = 0 does not yield the best hyperplane cut. Our main point is that the two cuts generated by MidFlow rounding of the Fiedler vector (with β = 1/3 and β = 1/4) are nearly twice as good as the best hyperplane cut. Even after the best hyperplane cut has been improved by taking the best result of 100 runs of a version of Fiduccia-Mattheyses local search, it is still much worse than the cuts obtained by flow-based rounding.
¹ The graph's edges are unweighted but are chosen by a randomized rule which is more likely to include an edge between two neighboring pixels if they have a similar grey value. Good cuts in the graph tend to run along discontinuities in the image, as one would expect.
[Figure 3 plot: quotient cut score (log scale, smaller is better) vs. size of small side, from 10 (worse balance) to 1M (better balance), for cuts in the Yahoo Groups "power-law graph"; an SDP-LB line lower-bounds the balanced cuts.]
Figure 3: This scatter plot of cuts in a 1.6 million node collaborative filtering graph shows
a surprising relationship between cut quality and balance (see section 3). The SDP lower
bound proves that all balanced cuts are worse than the unbalanced cuts seen on the left.
2.2
Effectiveness on real graphs and benchmarks
We have found the flow-based Midflow and MQI rounding methods to be highly effective in practice on diverse classes of graphs including space-like graphs and power
law graphs. Results for real-world power law graphs are shown in figure 5. Results
for a number of FE meshes can be found on the Graph Partitioning Archive website
http://staffweb.cms.gre.ac.uk/~c.walshaw/partition, which keeps
track of the best nearly balanced cuts ever found for a number of classic benchmarks. Using flow-based rounding to extract cuts from spectral-type embeddings,
we have found new record cuts for the majority of the largest graphs on the site,
including fe body, t60k, wing, brack2, fe tooth, fe rotor, 598a,
144, wave, m14b, and auto. It is interesting to note that the spectral method previously did not own any of the records for these classic benchmarks, although it could have
if flow-based rounding had been used instead of hyperplane rounding.
3
Finding balanced cuts in "power law" graphs
The spectral method does not require cuts to have perfect balance, but the denominator in
its quotient-style objective function does reward balance and punish imbalance. Thus one
might expect the spectral method to produce cuts with fairly good balance, and this is what
does happen for the class of spacelike graphs that inform much of our intuition.
However, there are now many economically important "power law" [5] graphs whose best quotient cuts have extremely bad balance. Examples at Yahoo include the web graph, social graphs based on DBLP co-authorship and Yahoo IM buddy lists, a music similarity graph, and bipartite collaborative filtering graphs relating Yahoo Groups with users, and advertisers with search phrases. To save space we show one scatter plot (figure 3) of quotient cut
scores versus balance that is typical for graphs from this class. We see that apparently there
is a tradeoff between these two quantities, and in fact the quotient cut score gets better as balance gets worse, which is exactly the opposite of what one would expect.
Figure 4: Left: a social graph with octopus structure as predicted by Chung and Lu [5]. Center: a "normalized cut" Spectral embedding chops off one tentacle per dimension. Right: an SDP embedding looks better and is more useful for finding balanced cuts.
When run on graphs of this type, the spectral method (and other quotient cut methods such
as Metis+MQI [13]) wants to chop off tiny pieces. This has at least two bad practical
effects. First, cutting off a tiny piece after paying for a computation on the whole graph
kills the scalability of divide-and-conquer algorithms by causing their overall run time to increase e.g. from n log n to n². Second, low-dimensional spectral embeddings of these graphs (see e.g. figure 4(b)) are nearly useless for visualization, and are also very poor
inputs for clustering schemes that use a small number of eigenvectors.
These problems can be avoided by solving a semidefinite relaxation of graph bisection that
has a much stronger balance constraint. This SDP (explained in the next section) has a
long history, with connections to papers going all the way back to Donath and Hoffman
[6] (via the concept of "eigenvalue optimization"). In 2004, Arora, Rao, and Vazirani [14]
proved the best-ever approximation guarantee for graph partitioning by analysing a version
of this SDP which was augmented with certain triangle inequalities that serve much the
same purpose as flow (but which are too expensive to solve for large graphs).
3.1
A semidefinite program which strengthens the balance requirement
The graph bisection problem can be expressed as a Quadratic Integer Program as follows. There is an n-element column vector x of indicator variables x_i, each of which assigns one node to a particular side of the cut by assuming a value from the set {−1, 1}. With these indicator values, the objective function (1/4)xᵀLx (where L is the graph's discrete Laplacian matrix) works out to be equal to the number of edges crossing the cut. Finally, the requirement of perfect balance is expressed by the constraint xᵀe = 0, where e is a vector of all ones. Since this QIP exactly encodes the graph bisection problem, solving it is NP-hard.
The spectral relaxation of this QIP attains solvability by allowing the indicator variables to assume arbitrary real values, provided that their average squared magnitude is 1.0. After this change, the objective function (1/4)xᵀLx is now just a lower bound on the cutsize. More interestingly for the present discussion, the balance constraint xᵀe = 0 now permits a qualitatively different kind of balance where a tiny group of nodes moves a long way out from the origin where the nodes acquire enough leverage to counterbalance everyone else. For graphs where the best quotient cut has good balance (e.g. meshes) this does not actually happen, but for graphs whose best quotient cut has bad balance, it does happen, as can be seen in figure 4(b).
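Restating the two programs just described in symbols (the QIP on the left, its spectral relaxation on the right):

```latex
\min_{x \in \{-1,1\}^n} \tfrac{1}{4}\, x^T L x \;\; \text{s.t. } x^T e = 0
\qquad\longrightarrow\qquad
\min_{x \in \mathbb{R}^n} \tfrac{1}{4}\, x^T L x \;\; \text{s.t. } \|x\|^2 = n,\; x^T e = 0.
```

The relaxed problem is solved by the Fiedler vector scaled so that ‖x‖² = n; the loophole is that ‖x‖² = n constrains only the average squared magnitude of the x_i, so a few coordinates may grow large while the rest shrink.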
These undesired solutions could be ruled out by requiring the squared magnitudes of the indicator values to be 1.0 individually instead of on average. However, in one dimension that would require picking values from the set {−1, 1}, which would once again cause the problem to be NP-hard. Fortunately, there is a way to escape from this dilemma which was brought to the attention of the CS community by the Max Cut algorithm of Goemans and Williamson [9]: if we allow the indicator variables to assume values that are r-dimensional unit vectors for some sufficiently large r,² then the program is solvable even with the strict requirement that every vector has squared length 1.0. After a small change of notation to reflect the fact that the collected indicator variables now form an n by r matrix X rather than a vector, this idea results in the nonlinear program
$$\min \; \tfrac{1}{4}\, L \bullet (XX^T) \;:\; \mathrm{diag}(XX^T) = e, \;\; e^T (XX^T) e = 0 \qquad (1)$$
which becomes an SDP by a change of variables from XXᵀ to the "Gram matrix" G:
$$\min \; \tfrac{1}{4}\, L \bullet G \;:\; \mathrm{diag}(G) = e, \;\; e^T G e = 0, \;\; G \succeq 0 \qquad (2)$$
The added constraint G ⪰ 0 requires G to be positive semidefinite, so that it can be factored to get back to the desired matrix of indicator vectors X.
3.2
Methods for solving the SDP for large graphs
Interior point methods cannot solve (2) for graphs with more than a few thousand nodes, but newer methods achieve better scaling by ensuring that all dense n by n matrices have only an implicit (and approximate) existence. A good example is Helmberg and Rendl's program SBmethod [11], which can solve the dual of (2) for graphs with about 50,000 nodes by converting it to an equivalent "eigenvalue optimization" problem. The output of SBmethod is a low-rank approximate spectral factorization of the Gram matrix, consisting of an estimated rank r, plus an n by r matrix X whose rows are the nodes' indicator vectors. SBmethod typically produces r-values that are much smaller than n or even √(2n). Moreover they seem to match the true dimensionality of simple spacelike graphs. For example, for a 3-d mesh we get r = 4, which is 3 dimensions for the manifold plus one more dimension for the hypersphere that it is wrapped around.
Burer and Monteiro's direct low-rank solver SDP-LR scales even better [2]. Surprisingly, their approach is to essentially forget about the SDP (2) and instead use non-linear programming techniques to solve (1). Specifically, they use an augmented Lagrangian approach to move the constraints into the objective function, which they then minimize using limited memory BFGS. A follow-up paper [3] provides a theoretical explanation of why the method does not fall into bad local minima despite the apparent non-convexity of (1).
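The following is a minimal sketch of this scheme (not the authors' SDP-LR code): a low-rank factorization G = XXᵀ, an augmented Lagrangian for the constraints of (1), and scipy's L-BFGS as the inner solver. The penalty weight, iteration counts, and multiplier-update rule are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def bisection_sdp_lowrank(L, r=4, outer_iters=20, sigma=10.0, seed=0):
    """Approximately solve min 1/4 <L, XX^T> s.t. diag(XX^T) = e,
    e^T XX^T e = 0, via low-rank factorization + augmented Lagrangian."""
    n = L.shape[0]
    X = np.random.default_rng(seed).standard_normal((n, r))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    lam = np.zeros(n)     # multipliers for the n diagonal constraints
    mu = 0.0              # multiplier for the balance constraint
    for _ in range(outer_iters):
        def fg(xflat):
            Y = xflat.reshape(n, r)
            LY = L @ Y
            norms = np.sum(Y * Y, axis=1) - 1.0    # diag(YY^T) - e
            s = Y.sum(axis=0)                      # Y^T e
            bal = s @ s                            # e^T YY^T e
            f = (0.25 * np.sum(Y * LY)
                 - lam @ norms + 0.5 * sigma * norms @ norms
                 - mu * bal + 0.5 * sigma * bal * bal)
            grad = (0.5 * LY
                    + (2.0 * (sigma * norms - lam))[:, None] * Y
                    + 2.0 * (sigma * bal - mu) * np.tile(s, (n, 1)))
            return f, grad.ravel()
        X = minimize(fg, X.ravel(), jac=True, method='L-BFGS-B').x.reshape(n, r)
        norms = np.sum(X * X, axis=1) - 1.0
        s = X.sum(axis=0)
        lam -= sigma * norms                       # first-order multiplier updates
        mu -= sigma * (s @ s)
    return X
```

Rounding the rows of the returned X (for example by the random projections used in Figure 5) then yields actual cuts.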
We have successfully run Burer and Monteiro's code on large graphs containing more than a million nodes. We typically run it several times with different small fixed values of r, and then choose the smallest r which allows the objective function to reach its best known value. On medium-size graphs this produces estimates for r which are in rough agreement with those produced by SBmethod. The run time scaling of SDP-LR is compared with that of ARPACK and hi_pr in figure 6.
² In the original work r = n, but there are theoretical reasons for believing that r ≈ √(2n) is big enough [3], plus there is empirical evidence that much smaller values work in practice.
[Figure 5: four panels plotting quotient cut score (smaller is better) vs. size of small side, from worse balance to better balance, for (a) the DBLP co-authorship social graph, (b) the Yahoo Instant Messenger social graph, (c) the Yahoo Groups vs. Users bipartite graph, and (d) the TREC WT10G web graph; each panel shows curves for Spectral + Hyperplanes, SDP + Hyperplanes, and SDP + Flow.]
Figure 5: Each of these four plots contains two lines showing the results of sweeping a hyperplane through a spectral embedding and through one dimension of an SDP embedding.
In all four cases, the spectral line is lower on the left, and the SDP line is lower on the right,
which means that Spectral produces better unbalanced cuts and the SDP produces better
balanced cuts. Cuts obtained by rounding random 1-d projections of the SDP embedding
using Midflow (to produce β-balanced cuts) followed by MQI (to improve the quotient cut
score) are also shown; these flow-based cuts are consistently better than hyperplane cuts.
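A sketch of that rounding step (the function name is hypothetical; `midflow` refers to the sketch in section 2): project the r-dimensional SDP embedding onto random directions and hand each induced node ordering to flow-based rounding.

```python
import numpy as np

def random_projection_orders(X, trials=10, seed=0):
    """Yield node orderings from random 1-d projections of the n x r
    SDP embedding X; each ordering is then rounded with Midflow + MQI."""
    rng = np.random.default_rng(seed)
    n, r = X.shape
    for _ in range(trials):
        u = rng.standard_normal(r)         # random direction on R^r
        yield np.argsort(X @ u)

# for order in random_projection_orders(X):
#     cut = midflow(G, list(order), beta=1/3)   # then improve with MQI
```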
3.3
Results
We have used the minbis program from Burer and Monteiro's SDP-LR v0.130301
package (with r < 10) to approximately solve (1) for several large graphs including: a
130,000 node social graph representing co-authorship in DBLP; a 1.9 million node social
graph built from the buddy lists of a subset of the users of Yahoo Instant Messenger; a
1.6 million node bipartite graph relating Yahoo Groups and users; and a 1.5 million node
graph made by symmetrizing the TREC WT10G web graph. It is clear from figure 5 that
in all four cases the SDP embedding leads to better balanced cuts, and that flow-based
rounding works better than hyperplane rounding. Also, figures 4(b) and 4(c) show 3-d Spectral
and SDP embeddings of a small subset of the Yahoo IM social graph; the SDP embedding
is qualitatively different and arguably better for visualization purposes.
Acknowledgments
We thank Satish Rao for many useful discussions.
[Figure 6 plot: run time in seconds vs. graph size (nodes + edges), log-log axes spanning roughly 100 to 1e+07 nodes+edges and 0.01 to 100,000 seconds; curves for "Solving Eig problem with ARPACK", "Solving SDP with SDP-LR", "Bisecting with Metis", and "Solving MidFlow with hi_pr".]
Figure 6: Run time scaling on subsets of the Yahoo IM graph. Finding Spectral and SDP embeddings with ARPACK and SDP-LR requires about the same amount of time, while MidFlow rounding with hi_pr is about 1000 times faster.
References
[1] N. Alon and V.D. Milman. λ₁, isoperimetric inequalities for graphs, and superconcentrators. Journal of Combinatorial Theory, Series B, 38:73–88, 1985.
[2] Samuel Burer and Renato D.C. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming (series B), 95(2):329–357, 2003.
[3] Samuel Burer and Renato D.C. Monteiro. Local minima and convergence in low-rank semidefinite programming. Technical report, Department of Management Sciences, University of Iowa, September 2003.
[4] Boris V. Cherkassky and Andrew V. Goldberg. On implementing the push-relabel method for the maximum flow problem. Algorithmica, 19(4):390–410, 1997.
[5] F. Chung and L. Lu. Average distances in random graphs with given expected degree sequences. Proceedings of National Academy of Science, 99:15879–15882, 2002.
[6] W.E. Donath and A. J. Hoffman. Lower bounds for partitioning of graphs. IBM J. Res. Develop., 17:420–425, 1973.
[7] Peter G. Doyle and J. Laurie Snell. Random walks and electric networks, 1984. Mathematical Association of America; now available under the GPL.
[8] C.M. Fiduccia and R.M. Mattheyses. A linear time heuristic for improving network partitions. In Design Automation Conference, pages 175–181, 1982.
[9] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. Assoc. Comput. Mach., 42:1115–1145, 1995.
[10] Stephen Guattery and Gary L. Miller. On the quality of spectral separators. SIAM Journal on Matrix Analysis and Applications, 19(3):701–719, 1998.
[11] C. Helmberg. Numerical evaluation of SBmethod. Math. Programming, 95(2):381–406, 2003.
[12] Bruce Hendrickson and Robert W. Leland. A multi-level algorithm for partitioning graphs. In Supercomputing, 1995.
[13] Kevin Lang and Satish Rao. A flow-based method for improving the expansion or conductance of graph cuts. In Integer Programming and Combinatorial Optimization, pages 325–337, 2003.
[14] Sanjeev Arora, Satish Rao, and Umesh V. Vazirani. Expander flows, geometric embeddings and graph partitioning. In STOC, pages 222–231, 2004.
[15] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[16] Daniel A. Spielman and Shang-Hua Teng. Spectral partitioning works: Planar graphs and finite element meshes. In FOCS, pages 96–105, 1996.
1,945 | 2,767 | Off-policy Learning with Options and Recognizers
Richard S. Sutton
University of Alberta
Edmonton, AB, Canada
Doina Precup
McGill University
Montreal, QC, Canada
Cosmin Paduraru
University of Alberta
Edmonton, AB, Canada
Anna Koop
University of Alberta
Edmonton, AB, Canada
Satinder Singh
University of Michigan
Ann Arbor, MI, USA
Abstract
We introduce a new algorithm for off-policy temporal-difference learning with function approximation that has lower variance and requires less
knowledge of the behavior policy than prior methods. We develop the notion of a recognizer, a filter on actions that distorts the behavior policy to
produce a related target policy with low-variance importance-sampling
corrections. We also consider target policies that are deviations from
the state distribution of the behavior policy, such as potential temporally
abstract options, which further reduces variance. This paper introduces
recognizers and their potential advantages, then develops a full algorithm
for linear function approximation and proves that its updates are in the
same direction as on-policy TD updates, which implies asymptotic convergence. Even though our algorithm is based on importance sampling,
we prove that it requires absolutely no knowledge of the behavior policy
for the case of state-aggregation function approximators.
Off-policy learning is learning about one way of behaving while actually behaving in another way. For example, Q-learning is an off-policy learning method because it learns
about the optimal policy while taking actions in a more exploratory fashion, e.g., according
to an $\epsilon$-greedy policy. Off-policy learning is of interest because only one way of selecting
actions can be used at any time, but we would like to learn about many different ways of
behaving from the single resultant stream of experience. For example, the options framework for temporal abstraction involves considering a variety of different ways of selecting
actions. For each such option one would like to learn a model of its possible outcomes suitable for planning and other uses. Such option models have been proposed as fundamental
building blocks of grounded world knowledge (Sutton, Precup & Singh, 1999; Sutton,
Rafols & Koop, 2005). Using off-policy learning, one would be able to learn predictive
models for many options at the same time from a single stream of experience.
Unfortunately, off-policy learning using temporal-difference methods has proven problematic when used in conjunction with function approximation. Function approximation is
essential in order to handle the large state spaces that are inherent in many problem do-
mains. Q-learning, for example, has been proven to converge to an optimal policy in the
tabular case, but is unsound and may diverge in the case of linear function approximation
(Baird, 1995). Precup, Sutton, and Dasgupta (2001) introduced and proved convergence for
the first off-policy learning algorithm with linear function approximation. They addressed
the problem of learning the expected value of a target policy based on experience generated
using a different behavior policy. They used importance sampling techniques to reduce the
off-policy case to the on-policy case, where existing convergence theorems apply (Tsitsiklis & Van Roy, 1997; Tadic, 2001). There are two important difficulties with that approach.
First, the behavior policy needs to be stationary and known, because it is needed to compute
the importance sampling corrections. Second, the importance sampling weights are often
ill-conditioned. In the worst case, the variance could be infinite and convergence would not
occur. The conditions required to prevent this were somewhat awkward and, even when
they applied and asymptotic convergence was assured, the variance could still be high and
convergence could be slow.
In this paper we address both of these problems in the context of off-policy learning for
options. We introduce the notion of a recognizer. Rather than specifying an explicit target
policy (for instance, the policy of an option), about which we want to make predictions, a
recognizer specifies a condition on the actions that are selected. For example, a recognizer
for the temporally extended action of picking up a cup would not specify which hand is to
be used, or what the motion should be at all different positions of the cup. The recognizer
would recognize a whole variety of directions of motion and poses as part of picking the
cup. The advantage of this strategy is not that one might prefer a multitude of different
behaviors, but that the behavior may be based on a variety of different strategies, all of
which are relevant, and we would like to learn from any of them. In general, a recognizer
is a function that recognizes or accepts a space of different ways of behaving and thus, can
learn from a wider range of data.
Recognizers have two advantages over direct specification of a target policy: 1) they are
a natural and easy way to specify a target policy for which importance sampling will be
well conditioned, and 2) they do not require the behavior policy to be known. The latter is
important because in many cases we may have little knowledge of the behavior policy, or a
stationary behavior policy may not even exist. We show that for the case of state aggregation, even if the behavior policy is unknown, convergence to a good model is achieved.
1  Non-sequential example
The benefits of using recognizers in off-policy learning can be most easily seen in a nonsequential context with a single continuous action. Suppose you are given a sequence of
sample actions $a_i \in [0, 1]$, selected i.i.d. according to probability density $b : [0, 1] \to \mathbb{R}^+$
(the behavior density). For example, suppose the behavior density is of the oscillatory
form shown as a red line in Figure 1. For each action, $a_i$, we observe a corresponding
outcome, $z_i \in \mathbb{R}$, a random variable whose distribution depends only on $a_i$. Thus the behavior density induces an outcome density. The on-policy problem is to estimate the mean
$m_b$ of the outcome density. This problem can be solved simply by averaging the sample
outcomes: $\hat{m}_b = (1/n) \sum_{i=1}^n z_i$. The off-policy problem is to use this same data to learn what
the mean would be if actions were selected in some way other than $b$, for example, if the
actions were restricted to a designated range, such as between 0.7 and 0.9.
There are two natural ways to pose this off-policy problem. The most straightforward way
is to be equally interested in all actions within the designated region. One professes to be
interested in actions selected according to a target density $\pi : [0, 1] \to \mathbb{R}^+$, which in the
example would be 5.0 between 0.7 and 0.9, and zero elsewhere, as in the dashed line in
Figure 1: The left panel shows the behavior policy and the target policies for the formulations of the problem with and without recognizers. The right panel shows empirical estimates of the variances for the two formulations (each an average of 200 sample variances) as a function of the number of sample actions.
The lowest line is for the formulation using empirically-estimated recognition probabilities.
Figure 1 (left). The importance-sampling estimate of the mean outcome is

$$\hat{m}_\pi = \frac{1}{n} \sum_{i=1}^n \frac{\pi(a_i)}{b(a_i)} z_i. \qquad (1)$$
This approach is problematic if there are parts of the region of interest where the behavior
density is zero or very nearly so, such as near 0.72 and 0.85 in the example. Here the
importance sampling ratios are exceedingly large and the estimate is poorly conditioned
(large variance). The upper curve in Figure 1 (right) shows the empirical variance of this
estimate as a function of the number of samples. The spikes and uncertain decline of the
empirical variance indicate that the distribution is very skewed and that the estimates are
very poorly conditioned.
The second way to pose the problem uses recognizers. One professes to be interested in
actions to the extent that they are both selected by b and within the designated region. This
leads to the target policy shown in blue in the left panel of Figure 1 (it is taller because it
still must sum to 1). For this problem, the variance of (1) is much smaller, as shown in
the lower two lines of Figure 1 (right). To make this way of posing the problem clear, we
introduce the notion of a recognizer function $c : \mathcal{A} \to \mathbb{R}^+$. The action space in the example
is $\mathcal{A} = [0, 1]$ and the recognizer is $c(a) = 1$ for $a$ between 0.7 and 0.9 and is zero elsewhere.
The target policy is defined in general by

$$\pi(a) = \frac{c(a)b(a)}{\sum_x c(x)b(x)} = \frac{c(a)b(a)}{\mu}. \qquad (2)$$

where $\mu = \sum_x c(x)b(x)$ is a constant, equal to the probability of recognizing an action from
the behavior policy. Given $\mu$, $\hat{m}_\pi$ from (1) can be rewritten in terms of the recognizer as

$$\hat{m}_\pi = \frac{1}{n} \sum_{i=1}^n z_i \frac{\pi(a_i)}{b(a_i)} = \frac{1}{n} \sum_{i=1}^n z_i \frac{c(a_i)b(a_i)}{\mu \, b(a_i)} = \frac{1}{n} \sum_{i=1}^n z_i \frac{c(a_i)}{\mu} \qquad (3)$$
Note that the target density does not appear at all in the last expression and that the behavior distribution appears only in $\mu$, which is independent of the sample action. If this
constant is known, then this estimator can be computed with no knowledge of $\pi$ or $b$. The
constant $\mu$ can easily be estimated as the fraction of recognized actions in the sample. The
lowest line in Figure 1 (right) shows the variance of the estimator using this fraction in
place of the recognition probability. Its variance is low, no worse than that of the exact
algorithm, and apparently slightly lower. Because this algorithm does not use the behavior
density, it can be applied when the behavior density is unknown or does not even exist. For
example, suppose actions were selected in some deterministic, systematic way that in the
long run produced an empirical distribution like b. This would be problematic for the other
algorithms but would require no modification of the recognition-fraction algorithm.
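The estimators in (1) and (3) are simple enough to compare directly in simulation. The sketch below is illustrative only: it assumes a Beta(2, 2) behavior density as a stand-in for the oscillatory density of Figure 1 (which the text does not specify exactly) and an arbitrary outcome function; neither choice comes from the paper.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
n = 500

a = rng.beta(2.0, 2.0, size=n)                     # actions from the behavior density b
z = np.sin(8 * a) + rng.normal(0.0, 0.3, size=n)   # outcomes; distribution depends only on a

c = ((a >= 0.7) & (a <= 0.9)).astype(float)        # binary recognizer on [0.7, 0.9]

# Estimator (1): flat target density pi(a) = 5.0 on [0.7, 0.9]; requires knowing b.
pi_flat = 5.0 * c
m_hat_flat = np.mean(pi_flat / beta.pdf(a, 2.0, 2.0) * z)

# Estimator (3): recognizer form, with mu estimated as the fraction of
# recognized actions; requires no knowledge of b at all.
mu_hat = c.mean()
m_hat_rec = np.mean((c / mu_hat) * z)

print(m_hat_flat, m_hat_rec)
```

Repeating this over many seeds reproduces the qualitative behavior of Figure 1 (right): the recognizer-based estimate is far better conditioned wherever the behavior density is small inside the region of interest.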
2  Recognizers improve conditioning of off-policy learning

The main use of recognizers is in formulating a target density $\pi$ about which we can successfully learn predictions, based on the current behavior being followed. Here we formalize this intuition.
Theorem 1 Let $A = \{a_1, \ldots, a_k\} \subseteq \mathcal{A}$ be a subset of all the possible actions. Consider a
fixed behavior policy $b$ and let $\Pi_A$ be the class of policies that only choose actions from $A$,
i.e., if $\pi(a) > 0$ then $a \in A$. Then the policy induced by $b$ and the binary recognizer $c_A$ is the
policy with minimum-variance one-step importance sampling corrections, among those in $\Pi_A$:

$$\pi \text{ as given by (2)} \;=\; \arg\min_{\pi \in \Pi_A} E_b\!\left[ \left( \frac{\pi(a_i)}{b(a_i)} \right)^{\!2} \right] \qquad (4)$$

Proof: Denote $\pi(a_i) = \pi_i$, $b(a_i) = b_i$. Then the expected variance of the one-step importance sampling corrections is:

$$E_b\!\left[ \left( \frac{\pi_i}{b_i} \right)^{\!2} \right] - \left( E_b\!\left[ \frac{\pi_i}{b_i} \right] \right)^{\!2} = \sum_i b_i \frac{\pi_i^2}{b_i^2} - 1 = \sum_i \frac{\pi_i^2}{b_i} - 1,$$

where the summation (here and everywhere below) is such that the action $a_i \in A$. We
want to find $\pi_i$ that minimizes this expression, subject to the constraint that $\sum_i \pi_i = 1$.
This is a constrained optimization problem. To solve it, we write down the corresponding
Lagrangian:

$$L(\pi_i, \lambda) = \sum_i \frac{\pi_i^2}{b_i} - 1 + \lambda \left( \sum_i \pi_i - 1 \right)$$

We take the partial derivatives wrt $\pi_i$ and $\lambda$ and set them to 0:

$$\frac{\partial L}{\partial \pi_i} = \frac{2 \pi_i}{b_i} + \lambda = 0 \;\Rightarrow\; \pi_i = -\frac{\lambda b_i}{2} \qquad (5)$$

$$\frac{\partial L}{\partial \lambda} = \sum_i \pi_i - 1 = 0 \qquad (6)$$

By taking (5) and plugging into (6), we get the following expression for $\lambda$:

$$-\frac{\lambda}{2} \sum_i b_i = 1 \;\Rightarrow\; \lambda = -\frac{2}{\sum_i b_i}$$

By substituting $\lambda$ into (5) we obtain:

$$\pi_i = \frac{b_i}{\sum_j b_j}$$

This is exactly the policy induced by the recognizer defined by $c(a_i) = 1$ iff $a_i \in A$.

We also note that it is advantageous, from the point of view of minimizing the variance of
the updates, to have recognizers that accept a broad range of actions:

Theorem 2 Consider two binary recognizers $c_1$ and $c_2$, such that $\mu_1 > \mu_2$. Then the importance sampling corrections for $c_1$ have lower variance than the importance sampling
corrections for $c_2$.

Proof: From the previous theorem, we have the variance of a recognizer $c_A$:

$$\mathrm{Var} = \sum_i \frac{\pi_i^2}{b_i} - 1 = \sum_i \frac{b_i}{\left( \sum_{j \in A} b_j \right)^{2}} - 1 = \frac{1}{\sum_{j \in A} b_j} - 1 = \frac{1}{\mu} - 1$$

which is strictly decreasing in the recognition probability $\mu$, so $\mu_1 > \mu_2$ implies a lower variance for $c_1$.
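Both results are easy to verify numerically. The sketch below uses a random behavior policy over six discrete actions; this finite setting is an illustrative assumption of ours, not a restriction of the theorems.

```python
import numpy as np

rng = np.random.default_rng(1)

b = rng.dirichlet(np.ones(6))     # behavior policy over 6 discrete actions
A = np.array([0, 1, 2])           # recognized subset of actions
mu = b[A].sum()                   # recognition probability

pi_star = b[A] / mu               # policy induced by the recognizer (equation 2)

def correction_variance(pi):
    # Variance of one-step corrections for a policy supported on A (see proof).
    return np.sum(pi**2 / b[A]) - 1.0

for _ in range(10000):
    pi = rng.dirichlet(np.ones(len(A)))     # a random competitor in Pi_A
    assert correction_variance(pi) >= correction_variance(pi_star) - 1e-12  # Theorem 1

# Closed form from the proof of Theorem 2: Var = 1/mu - 1.
assert np.isclose(correction_variance(pi_star), 1.0 / mu - 1.0)
```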
3  Formal framework for sequential problems

We turn now to the full case of learning about sequential decision processes with function
approximation. We use the standard framework in which an agent interacts with a stochastic environment. At each time step $t$, the agent receives a state $s_t$ and chooses an action $a_t$.
We assume for the moment that actions are selected according to a fixed behavior policy,
$b : S \times \mathcal{A} \to [0, 1]$ where $b(s, a)$ is the probability of selecting action $a$ in state $s$. The behavior policy is used to generate a sequence of experience (observations, actions and rewards).
The goal is to learn, from this data, predictions about different ways of behaving. In this
paper we focus on learning predictions about expected returns, but other predictions can be
tackled as well (for instance, predictions of transition models for options (Sutton, Precup
& Singh, 1999), or predictions specified by a TD-network (Sutton & Tanner, 2005; Sutton,
Rafols & Koop, 2006)). We assume that the state space is large or continuous, and function
approximation must be used to compute any values of interest. In particular, we assume a
space of feature vectors $\Phi$ and a mapping $\phi : S \to \Phi$. We denote by $\phi_s$ the feature vector
associated with $s$.
An option is defined as a triple $o = \langle I, \pi, \beta \rangle$ where $I \subseteq S$ is the set of states in which the
option can be initiated, $\pi$ is the internal policy of the option and $\beta : S \to [0, 1]$ is a stochastic
termination condition. In the option work (Sutton, Precup & Singh, 1999), each of these
elements has to be explicitly specified and fixed in order for an option to be well defined.
Here, we will instead define options implicitly, using the notion of a recognizer.
A recognizer is defined as a function $c : S \times \mathcal{A} \to [0, 1]$, where $c(s, a)$ indicates to what
extent the recognizer allows action a in state s. An important special case, which we treat in
this paper, is that of binary recognizers. In this case, c is an indicator function, specifying
a subset of actions that are allowed, or recognized, given a particular state. Note that
recognizers do not specify policies; instead, they merely give restrictions on the policies
that are allowed or recognized.
A recognizer $c$ together with a behavior policy $b$ generates a target policy $\pi$, where:

$$\pi(s, a) = \frac{b(s, a) c(s, a)}{\sum_x b(s, x) c(s, x)} = \frac{b(s, a) c(s, a)}{\mu(s)} \qquad (7)$$

The denominator of this fraction, $\mu(s) = \sum_x b(s, x) c(s, x)$, is the recognition probability at $s$,
i.e., the probability that an action will be accepted at $s$ when behavior is generated according
to $b$. The policy $\pi$ is only defined at states for which $\mu(s) > 0$. The numerator gives the
probability that action $a$ is produced by the behavior and recognized in $s$. Note that if the
recognizer accepts all state-action pairs, i.e. $c(s, a) = 1, \forall s, a$, then $\pi$ is the same as $b$.
Since a recognizer and a behavior policy can specify together a target policy, we can use
recognizers as a way to specify policies for options, using (7). An option can only be
initiated at a state for which at least one action is recognized, so $\mu(s) > 0, \forall s \in I$. Similarly,
the termination condition of such an option, $\beta$, is defined as $\beta(s) = 1$ if $\mu(s) = 0$. In other
words, the option must terminate if no actions are recognized at a given state. At all other
states, $\beta$ can be defined between 0 and 1 as desired.
We will focus on computing the reward model of an option o, which represents the expected
total return. The expected values of different features at the end of the option can be
estimated similarly. The quantity that we want to compute is
$$E_o\{R(s)\} = E\{r_1 + r_2 + \ldots + r_T \,|\, s_0 = s, \pi, \beta\}$$

where $s \in I$, experience is generated according to the policy of the option, $\pi$, and $T$ denotes
the random variable representing the time step at which the option terminates according to
$\beta$. We assume that linear function approximation is used to represent these values, i.e.

$$E_o\{R(s)\} \approx \theta^T \phi_s$$

where $\theta$ is a vector of parameters.
4  Off-policy learning algorithm
In this section we present an adaptation of the off-policy learning algorithm of Precup,
Sutton & Dasgupta (2001) to the case of learning about options. Suppose that an option's
policy $\pi$ was used to generate behavior. In this case, learning the reward model of the
option is a special case of temporal-difference learning of value functions. The forward
view of this algorithm is as follows. Let $\tilde{R}_t^{(n)}$ denote the truncated n-step return starting at
time step $t$ and let $y_t$ denote the 0-step truncated return, $\tilde{R}_t^{(0)}$. By the definition of the n-step
truncated return, we have:

$$\tilde{R}_t^{(n)} = r_{t+1} + (1 - \beta_{t+1}) \tilde{R}_{t+1}^{(n-1)}.$$

This is similar to the case of value functions, but it accounts for the possibility of terminating the option at time step $t + 1$. The $\lambda$-return is defined in the usual way:

$$\tilde{R}_t^\lambda = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} \tilde{R}_t^{(n)}.$$

The parameters of the linear function approximator are updated on every time step proportionally to:

$$\Delta\tilde{\theta}_t = \left[ \tilde{R}_t^\lambda - y_t \right] \nabla_\theta y_t \, (1 - \beta_1) \cdots (1 - \beta_t).$$
In our case, however, trajectories are generated according to the behavior policy b. The
main idea of the algorithm is to use importance sampling corrections in order to account
for the difference in the state distribution of the two policies.
Let $\rho_t = \frac{\pi(s_t, a_t)}{b(s_t, a_t)}$ be the importance sampling ratio at time step $t$. The truncated n-step return, $R_t^{(n)}$, satisfies:

$$R_t^{(n)} = \rho_t \left[ r_{t+1} + (1 - \beta_{t+1}) R_{t+1}^{(n-1)} \right].$$

The update to the parameter vector is proportional to:

$$\Delta\theta_t = \left[ R_t^\lambda - y_t \right] \nabla_\theta y_t \, \rho_0 (1 - \beta_1) \cdots \rho_{t-1} (1 - \beta_t).$$
The following result shows that the expected updates of the on-policy and off-policy algorithms are the same.
Theorem 3 For every time step $t \geq 0$ and any initial state $s$,

$$E_b[\Delta\theta_t \,|\, s] = E_\pi[\Delta\tilde{\theta}_t \,|\, s].$$

Proof: First we will show by induction that $E_b\{R_t^{(n)} \,|\, s\} = E_\pi\{\tilde{R}_t^{(n)} \,|\, s\}, \forall n$ (which implies
that $E_b\{R_t^\lambda \,|\, s\} = E_\pi\{\tilde{R}_t^\lambda \,|\, s\}$). For $n = 0$, the statement is trivial. Assuming that it is true for
$n - 1$, we have

$$E_b\left\{ R_t^{(n)} \,|\, s \right\} = \sum_a b(s, a) \sum_{s'} P_{ss'}^{a} \, \rho(s, a) \left[ r_{ss'}^{a} + (1 - \beta(s')) E_b\left\{ R_{t+1}^{(n-1)} \,|\, s' \right\} \right]$$

$$= \sum_a \sum_{s'} P_{ss'}^{a} \, b(s, a) \frac{\pi(s, a)}{b(s, a)} \left[ r_{ss'}^{a} + (1 - \beta(s')) E_\pi\left\{ \tilde{R}_{t+1}^{(n-1)} \,|\, s' \right\} \right]$$

$$= \sum_a \pi(s, a) \sum_{s'} P_{ss'}^{a} \left[ r_{ss'}^{a} + (1 - \beta(s')) E_\pi\left\{ \tilde{R}_{t+1}^{(n-1)} \,|\, s' \right\} \right] = E_\pi\left\{ \tilde{R}_t^{(n)} \,|\, s \right\}.$$
Now we are ready to prove the theorem's main statement. Defining $\Upsilon_t$ to be the set of all
trajectory components up to state $s_t$, we have:

$$E_b\{\Delta\theta_t \,|\, s\} = \sum_{\tau \in \Upsilon_t} P_b(\tau \,|\, s) \, E_b\left\{ (R_t^\lambda - y_t) \nabla_\theta y_t \,|\, \tau \right\} \prod_{i=0}^{t-1} \rho_i (1 - \beta_{i+1})$$

$$= \sum_{\tau \in \Upsilon_t} \left( \prod_{i=0}^{t-1} b_i P_{s_i s_{i+1}}^{a_i} \right) \left[ E_b\left\{ R_t^\lambda \,|\, s_t \right\} - y_t \right] \nabla_\theta y_t \prod_{i=0}^{t-1} \frac{\pi_i}{b_i} (1 - \beta_{i+1})$$

$$= \sum_{\tau \in \Upsilon_t} \left( \prod_{i=0}^{t-1} \pi_i P_{s_i s_{i+1}}^{a_i} \right) \left[ E_\pi\left\{ \tilde{R}_t^\lambda \,|\, s_t \right\} - y_t \right] \nabla_\theta y_t \, (1 - \beta_1) \cdots (1 - \beta_t)$$

$$= \sum_{\tau \in \Upsilon_t} P_\pi(\tau \,|\, s) \, E_\pi\left\{ (\tilde{R}_t^\lambda - y_t) \nabla_\theta y_t \,|\, \tau \right\} (1 - \beta_1) \cdots (1 - \beta_t) = E_\pi\left\{ \Delta\tilde{\theta}_t \,|\, s \right\}.$$

Note that we are able to use $s_t$ and $\tau$ interchangeably because of the Markov property.
Since we have shown that $E_b[\Delta\theta_t \,|\, s] = E_\pi[\Delta\tilde{\theta}_t \,|\, s]$ for any state $s$, it follows that the expected
updates will also be equal for any distribution of the initial state s. When learning the model
of options with data generated from the behavior policy b, the starting state distribution with
respect to which the learning is performed, $I_0$, is determined by the stationary distribution
of the behavior policy, as well as the initiation set of the option I. We note also that the
importance sampling corrections only have to be performed for the trajectory since the
initiation of the updates for the option. No corrections are required for the experience prior
to this point. This should generate updates that have significantly lower variance than in
the case of learning values of policies (Precup, Sutton & Dasgupta, 2001).
Because of the termination condition of the option, $\beta$, $\Delta\theta$ can quickly decay to zero. To
avoid this problem, we can use a restart function $g : S \to [0, 1]$, such that $g(s_t)$ specifies
the extent to which the updating episode is considered to start at time $t$. Adding restarts
generates a new forward update:

$$\Delta\theta_t = (R_t^\lambda - y_t) \nabla_\theta y_t \sum_{i=0}^{t} g_i \, \rho_i \ldots \rho_{t-1} (1 - \beta_{i+1}) \ldots (1 - \beta_t), \qquad (8)$$
where $R_t^\lambda$ is the same as above. With an adaptation of the proof in Precup, Sutton &
Dasgupta (2001), we can show that we get the same expected value of updates by applying
this algorithm from the original starting distribution as we would by applying the algorithm
without restarts from a starting distribution defined by $I_0$ and $g$. We can turn this forward
algorithm into an incremental, backward view algorithm in the following way:
• Initialize $k_0 = g_0$, $e_0 = k_0 \nabla_\theta y_0$
• At every time step $t$:

$$\delta_t = \rho_t \left( r_{t+1} + (1 - \beta_{t+1}) y_{t+1} \right) - y_t$$
$$\theta_{t+1} = \theta_t + \alpha \delta_t e_t$$
$$k_{t+1} = \rho_t k_t (1 - \beta_{t+1}) + g_{t+1}$$
$$e_{t+1} = \lambda \rho_t (1 - \beta_{t+1}) e_t + k_{t+1} \nabla_\theta y_{t+1}$$
Using a similar technique to that of Precup, Sutton & Dasgupta (2001) and Sutton & Barto
(1998), we can prove that the forward and backward algorithm are equivalent (omitted due
to lack of space). This algorithm is guaranteed to converge if the variance of the updates is
finite (Precup, Sutton & Dasgupta, 2001). In the case of options, the termination condition
$\beta$ can be used to ensure that this is the case.
5  Learning when the behavior policy is unknown
In this section, we consider the case in which the behavior policy is unknown. This case
is generally problematic for importance sampling algorithms, but the use of recognizers
will allow us to define importance sampling corrections, as well as a convergent algorithm.
Recall that when using a recognizer, the target policy of the option is defined as:

$$\pi(s, a) = \frac{c(s, a) b(s, a)}{\mu(s)}$$

and the importance sampling correction becomes:

$$\rho(s, a) = \frac{\pi(s, a)}{b(s, a)} = \frac{c(s, a)}{\mu(s)}$$

Of course, $\mu(s)$ depends on $b$. If $b$ is unknown, instead of $\mu(s)$, we will use a maximum likelihood estimate $\hat{\mu} : S \to [0, 1]$. The structure used to compute $\hat{\mu}$ will have to be compatible
with the feature space used to represent the reward model. We will make this more precise
below. Likewise, the recognizer $c(s, a)$ will have to be defined in terms of the features used
to represent the model. We will then define the importance sampling corrections as:

$$\hat{\rho}(s, a) = \frac{c(s, a)}{\hat{\mu}(s)}$$
We consider the case in which the function approximator used to model the option is actually a state aggregator. In this case, we will define recognizers which behave consistently in
each partition, i.e., $c(s, a) = c(p, a), \forall s \in p$. This means that an action is either recognized
or not recognized in all states of the partition. The recognition probability $\hat{\mu}$ will have one
entry for every partition $p$ of the state space. Its value will be:

$$\hat{\mu}(p) = \frac{N(p, c = 1)}{N(p)}$$

where $N(p)$ is the number of times partition $p$ was visited, and $N(p, c = 1)$ is the number of times the action taken in $p$ was recognized. In the limit, w.p.1, $\hat{\mu}$ converges to
$\sum_s d^b(s|p) \sum_a c(p, a) b(s, a)$ where $d^b(s|p)$ is the probability of visiting state $s$ from partition $p$ under the stationary distribution of $b$. At this limit, $\hat{\pi}(s, a) = \hat{\rho}(s, a) b(s, a)$ will be a
well-defined policy (i.e., $\sum_a \hat{\pi}(s, a) = 1$). Using Theorem 3, off-policy updates using importance sampling corrections $\hat{\rho}$ will have the same expected value as on-policy updates
using $\hat{\pi}$. Note though that the learning algorithm never uses $\hat{\pi}$; the only quantities needed
are $\hat{\rho}$, which are learned incrementally from data.
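Under state aggregation the whole estimate reduces to simple counting. A minimal sketch (class and method names are ours, not the paper's):

```python
from collections import defaultdict

class RecognitionProbabilities:
    """Counting estimate mu_hat(p) = N(p, c=1) / N(p), one entry per partition."""

    def __init__(self):
        self.n_visits = defaultdict(int)       # N(p)
        self.n_recognized = defaultdict(int)   # N(p, c = 1)

    def update(self, p, recognized):
        # Called once per time step with the partition p of s_t and the
        # recognizer's verdict on the action actually taken there.
        self.n_visits[p] += 1
        if recognized:
            self.n_recognized[p] += 1

    def mu_hat(self, p):
        return self.n_recognized[p] / self.n_visits[p]

    def rho_hat(self, p, c_pa):
        # Importance sampling correction c(p, a) / mu_hat(p); note that the
        # behavior policy b is never consulted.
        return c_pa / self.mu_hat(p)
```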
For the case of general linear function approximation, we conjecture that a similar idea
can be used, where the recognition probability is learned using logistic regression. The
development of this part is left for future work.
Acknowledgements
The authors gratefully acknowledge the ideas and encouragement they have received in this
work from Eddie Rafols, Mark Ring, Lihong Li and other members of the rlai.net group.
We thank Csaba Szepesvari and the reviewers of the paper for constructive comments. This
research was supported in part by iCore, NSERC, Alberta Ingenuity, and CFI.
References
Baird, L. C. (1995). Residual algorithms: Reinforcement learning with function approximation. In
Proceedings of ICML.
Precup, D., Sutton, R. S. and Dasgupta, S. (2001). Off-policy temporal-difference learning with
function approximation. In Proceedings of ICML.
Sutton, R.S., Precup D. and Singh, S (1999). Between MDPs and semi-MDPs: A framework for
temporal abstraction in reinforcement learning. Artificial Intelligence, vol. 112, pp. 181–211.
Sutton, R.S. and Tanner, B. (2005). Temporal-difference networks. In Proceedings of NIPS-17.
Sutton, R.S., Rafols, E. and Koop, A. (2006). Temporal abstraction in temporal-difference networks.
In Proceedings of NIPS-18.
Tadic, V. (2001). On the convergence of temporal-difference learning with linear function approximation. Machine Learning, vol. 42, pp. 241–267.
Tsitsiklis, J. N., and Van Roy, B. (1997). An analysis of temporal-difference learning with function
approximation. IEEE Transactions on Automatic Control 42:674–690.
1,946 | 2,768 | An Alternative Infinite Mixture Of Gaussian Process Experts
Edward Meeds and Simon Osindero
Department of Computer Science
University of Toronto
Toronto, M5S 3G4
{ewm,osindero}@cs.toronto.edu
Abstract
We present an infinite mixture model in which each component comprises a multivariate Gaussian distribution over an input space, and a
Gaussian Process model over an output space. Our model is neatly able
to deal with non-stationary covariance functions, discontinuities, multimodality and overlapping output signals. The work is similar to that by
Rasmussen and Ghahramani [1]; however, we use a full generative model
over input and output space rather than just a conditional model. This allows us to deal with incomplete data, to perform inference over inverse
functional mappings as well as for regression, and also leads to a more
powerful and consistent Bayesian specification of the effective ?gating
network? for the different experts.
1  Introduction
Gaussian process (GP) models are powerful tools for regression, function approximation,
and predictive density estimation. However, despite their power and flexibility, they suffer
from several limitations. The computational requirements scale cubically with the number
of data points, thereby necessitating a range of approximations for large datasets. Another
problem is that it can be difficult to specify priors and perform learning in GP models if we
require non-stationary covariance functions, multi-modal output, or discontinuities.
There have been several attempts to circumvent some of these lacunae, for example [2, 1].
In particular the Infinite Mixture of Gaussian Process Experts (IMoGPE) model proposed
by Rasmussen and Ghahramani [1] neatly addresses the aforementioned key issues. In a
single GP model, an n by n matrix must be inverted during inference. However, if we use a
model composed of multiple GPs, each responsible only for a subset of the data, then the
computational complexity of inverting an n by n matrix is replaced by several inversions
of smaller matrices; for large datasets this can result in a substantial speed-up and may
allow one to consider large-scale problems that would otherwise be unwieldy. Furthermore,
by combining multiple stationary GP experts, we can easily accommodate non-stationary
covariance and noise levels, as well as distinctly multi-modal outputs. Finally, by placing a
Dirichlet process prior over the experts we can allow the data and our prior beliefs (which
may be rather vague) to automatically determine the number of components to use.
In this work we present an alternative infinite model that is strongly inspired by the work
in [1], but which uses a different formulation for the mixture of experts that is in the style
presented in, for example [3, 4]. This alternative approach effectively uses posterior
Figure 1: Left: Graphical model for the standard MoE model [6]. The expert indicators
$\{z_{(i)}\}$ are specified by a gating network applied to the inputs $\{x_{(i)}\}$. Right: An alternative
view of the MoE model using a full generative model [4]. The distribution of input locations is
now given by a mixture model, with components for each expert. Conditioned on the input
locations, the posterior responsibilities for each mixture component behave like a gating
network.
responsibilities from a mixture distribution as the gating network. Even if the task at hand
is simply output density estimation or regression, we suggest a full generative model over
inputs and outputs might be preferable to a purely conditional model. The generative approach retains all the strengths of [1] and also has a number of potential advantages, such
as being able to deal with partially specified data (e.g. missing input co-ordinates) and
being able to infer inverse functional mappings (i.e. the input space given an output value).
The generative approach also affords us a richer and more consistent way of specifying
our prior beliefs about how the covariance structure of the outputs might vary as we move
within input space.
An example of the type of generative model which we propose is shown in figure 2. We
use a Dirichlet process prior over a countably infinite number of experts and each expert
comprises two parts: a density over input space describing the distribution of input points
associated with that expert, and a Gaussian Process model over the outputs associated with
that expert. In this preliminary exposition, we restrict our attention to experts whose input space densities are given by a single full-covariance Gaussian. Even this simple approach
demonstrates interesting performance and capabilities. However, in a more elaborate setup
the input density associated with each expert might itself be an infinite mixture of simpler distributions (for instance, an infinite mixture of Gaussians [5]) to allow for the most
flexible partitioning of input space amongst the experts.
The structure of the paper is as follows. We begin in section 2 with a brief overview of
two ways of thinking about Mixtures of Experts. Then, in section 3, we give the complete
specification and graphical depiction of our generative model, and in section 4 we outline
the steps required to perform Monte Carlo inference and prediction. In section 5 we present
the results of several simple simulations that highlight some of the salient features of our
proposal, and finally in section 6, we discuss our work and place it in relation to similar
techniques.
2  Mixtures of Experts
In the standard mixture of experts (MoE) model [6], a gating network probabilistically
mixes regression components. One subtlety in using GPs in a mixture of experts model is
that IID assumptions on the data no longer hold and we must specify joint distributions for
each possible assignment of experts to data. Let $\{x_{(i)}\}$ be the set of d-dimensional input
vectors, $\{y_{(i)}\}$ be the set of scalar outputs, and $\{z_{(i)}\}$ be the set of expert indicators which
assign data points to experts.
The likelihood of the outputs, given the inputs, is specified in equation 1, where $\Theta_r^{GP}$ represents the GP parameters of the $r$th expert, $\Theta^g$ represents the parameters of the gating
network, and the summation is over all possible configurations of indicator variables.
Figure 2: The graphical model representation of the alternative infinite mixture of GP
experts (AiMoGPE) model proposed in this paper. We have used $x^r_{(i)}$ to represent the
$i$th data point in the set of input data whose expert label is $r$, and $Y_r$ to represent the set of
all output data whose expert label is $r$. In other words, input data are IID given their expert
label, whereas the sets of output data are IID given their corresponding sets of input data.
The lightly shaded boxes with rounded corners represent hyper-hyper parameters that are
fixed ($\Omega$ in the text). The DP concentration parameter $\alpha_0$, the expert indicator variables,
$\{z_{(i)}\}$, the gate hyperparameters, $\psi^x = \{\mu_0, \Sigma_0, \beta_c, S\}$, the gate component parameters,
$\theta_r^x = \{\mu_r, \Sigma_r\}$, and the GP expert parameters, $\theta_r^{GP} = \{v_{0r}, v_{1r}, w_{jr}\}$, are all updated for
all $r$ and $j$.
$$P(\{y_{(i)}\} \,|\, \{x_{(i)}\}, \Theta) = \sum_Z P(\{z_{(i)}\} \,|\, \{x_{(i)}\}, \Theta^g) \prod_r P(\{y_{(i)} : z_{(i)} = r\} \,|\, \{x_{(i)} : z_{(i)} = r\}, \Theta_r^{GP}) \qquad (1)$$
There is an alternative view of the MoE model in which the experts also generate the inputs,
rather than simply being conditioned on them [3, 4] (see figure 1). This alternative view
employs a joint mixture model over input and output space, even though the objective
is still primarily that of estimating conditional densities i.e. outputs given inputs. The
gating network effectively gets specified by the posterior responsibilities of each of the
different components in the mixture. An advantage of this perspective is that it can easily
accommodate partially observed inputs and it also allows "reverse-conditioning", should
we wish to estimate where in input space a given output value is likely to have originated.
For a mixture model using Gaussian Process experts, the likelihood is given by

$$P(\{x_{(i)}\}, \{y_{(i)}\} \,|\, \Theta) = \sum_Z P(\{z_{(i)}\} \,|\, \Theta^g) \prod_r P(\{y_{(i)} : z_{(i)} = r\} \,|\, \{x_{(i)} : z_{(i)} = r\}, \Theta_r^{GP}) \, P(\{x_{(i)} : z_{(i)} = r\} \,|\, \Theta^g) \qquad (2)$$

where the description of the density over input space is encapsulated in $\Theta^g$.
3  Infinite Mixture of Gaussian Processes: A Joint Generative Model
The graphical structure for our full generative model is shown in figure 2. Our generative
process does not produce IID data points and is therefore most simply formulated either as
a joint distribution over a dataset of a given size, or as a set of conditionals in which we
incrementally add data points. To construct a complete set of N sample points from the prior
(specified by top-level hyper-parameters ?) we would perform the following operations:
1. Sample Dirichlet process concentration variable ?0 given the top-level hyperparameters.
2. Construct a partition of N objects into at most N groups using a Dirichlet process. This assignment of objects is denoted by using a set the indicator variables
{z(i) }N
i=1 .
3. Sample the gate hyperparameters ?x given the top-level hyperparameters.
4. For each grouping of indicators {z(i) : z(i) = r}, sample the input space parameters ?rx conditioned on ?x . ?rx defines the density in input space, in our case a
full-covariance Gaussian.
5. Given the parameters ?rx for each group, sample the locations of the input points
Xr ? {x(i) : z(i) = r}.
6. For each group, sample the hyper-parameters for the GP expert associated with
that group, ?rGP .
7. Using the input locations Xr and hyper-parameters ?rGP for the individual groups,
formulate the GP output covariance matrix and sample the set of output values,
Yr ? {y(i) : z(i) = r} from this joint Gaussian distribution.
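Step 2 is the only non-standard piece of machinery for readers unfamiliar with Dirichlet process priors; it can be carried out with the usual Chinese-restaurant-process construction, the sequential form of the Polya urn used for the indicators. A minimal sketch:

```python
import numpy as np

def crp_partition(n, alpha0, rng):
    """Step 2: partition n objects with a Dirichlet process (Chinese restaurant
    process). Returns indicator variables z_(i) in {0, ..., K-1}."""
    z = [0]
    for i in range(1, n):
        counts = np.bincount(z)
        # Join an existing group with probability proportional to its size,
        # or open a new group with probability proportional to alpha0.
        probs = np.append(counts, alpha0) / (i + alpha0)
        z.append(int(rng.choice(len(probs), p=probs)))
    return np.asarray(z)

z = crp_partition(200, alpha0=1.0, rng=np.random.default_rng(0))
```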
We write the full joint distribution of our model as follows.

$$P(\{x_{(i)}, y_{(i)}\}_{i=1}^N, \{z_{(i)}\}_{i=1}^N, \{\theta_r^x\}_{r=1}^N, \{\theta_r^{GP}\}_{r=1}^N, \alpha_0, \psi^x \,|\, N, \Omega) =$$
$$\prod_{r=1}^N \left[ H_r^N P(\theta_r^x \,|\, \psi^x) P(X_r \,|\, \theta_r^x) P(\theta_r^{GP} \,|\, \Omega) P(Y_r \,|\, X_r, \theta_r^{GP}) + (1 - H_r^N) D_0(\theta_r^x, \theta_r^{GP}) \right]$$
$$\times \; P(\{z_{(i)}\}_{i=1}^N \,|\, N, \alpha_0) \, P(\alpha_0 \,|\, \Omega) \, P(\psi^x \,|\, \Omega) \qquad (3)$$
Where we have used the supplementary notation: $H_r^N = 0$ if $\{z_{(i)} : z_{(i)} = r\}$ is the
empty set and $H_r^N = 1$ otherwise; and $D_0(\theta_r^x, \theta_r^{GP})$ is a delta function on an (irrelevant)
dummy set of parameters to ensure proper normalisation.
For the GP components, we use a standard, stationary covariance function of the form
$$Q(x_{(i)}, x_{(h)}) = v_0 \exp\left[ -\frac{1}{2} \sum_{j=1}^{D} \left( x_{(i)j} - x_{(h)j} \right)^2 / w_j^2 \right] + \delta(i, h) \, v_1 \qquad (4)$$
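Equation (4) translates directly into a covariance-matrix routine, after which step 7 of the generative process is a single multivariate normal draw. A sketch, with illustrative parameter values of our choosing:

```python
import numpy as np

def gp_cov(X, v0, v1, w):
    """Covariance matrix of equation (4) for inputs X of shape (n, D)."""
    diff = X[:, None, :] - X[None, :, :]                 # pairwise differences
    Q = v0 * np.exp(-0.5 * np.sum((diff / w) ** 2, axis=-1))
    return Q + v1 * np.eye(len(X))                       # delta(i, h) v1 term

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(50, 1))
Q = gp_cov(X, v0=1.0, v1=0.01, w=np.array([2.0]))
Y = rng.multivariate_normal(np.zeros(len(X)), Q)         # step 7: sample outputs
```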
The individual distributions in equation 3 are defined as follows¹:

$$P(\alpha_0 \,|\, \Omega) = \mathcal{G}(\alpha_0; a_{\alpha_0}, b_{\alpha_0}) \qquad (5)$$
$$P(\{z_{(i)}\}_{i=1}^N \,|\, N, \alpha_0) = \mathcal{PU}(\alpha_0, N) \qquad (6)$$
$$P(\psi^x \,|\, \Omega) = \mathcal{N}(\mu_0; \mu_x, \Sigma_x / f_0) \, \mathcal{W}(\Sigma_0^{-1}; \nu_0, f_0 \Sigma_x / \nu_0) \, \mathcal{G}(\beta_c; a_{\beta_c}, b_{\beta_c}) \, \mathcal{W}(S^{-1}; \nu_S, f_S \Sigma_x / \nu_S) \qquad (7)$$
$$P(\theta_r^x \,|\, \psi^x) = \mathcal{N}(\mu_r; \mu_0, \Sigma_0) \, \mathcal{W}(\Sigma_r^{-1}; \beta_c, S / \beta_c) \qquad (8)$$
$$P(X_r \,|\, \theta_r^x) = \mathcal{N}(X_r; \mu_r, \Sigma_r) \qquad (9)$$
$$P(\theta_r^{GP} \,|\, \Omega) = \mathcal{G}(v_{0r}; a_0, b_0) \, \mathcal{G}(v_{1r}; a_1, b_1) \prod_{j=1}^{D} \mathcal{LN}(w_{jr}; a_w, b_w) \qquad (10)$$
$$P(Y_r \,|\, X_r, \theta_r^{GP}) = \mathcal{N}(Y_r; \mu_{Q_r}, \sigma_{Q_r}^2) \qquad (11)$$
¹ We use the notation $\mathcal{N}$, $\mathcal{W}$, $\mathcal{G}$, and $\mathcal{LN}$ to represent the normal, the Wishart, the gamma, and the
log-normal distributions, respectively; we use the parameterizations found in [7] (Appendix A). The
notation PU refers to the Polya urn distribution [8].
In an approach similar to Rasmussen [5], we use the input data mean $\mu_x$ and covariance
$\Sigma_x$ to provide an automatic normalisation of our dataset. We also incorporate additional
hyperparameters $f_0$ and $f_S$, which allow prior beliefs about the variation in location of $\mu_r$
and size of $\Sigma_r$, relative to the data covariance.
4  Monte Carlo Updates
Almost all the integrals and summations required for inference and learning operations
within our model are analytically intractable, and therefore necessitate Monte Carlo approximations. Fortunately, all the necessary updates are relatively straightforward to carry
out using a Markov Chain Monte Carlo (MCMC) scheme employing Gibbs sampling and
Hybrid Monte Carlo. We also note that in our model the predictive density depends on
the entire set of test locations (in input space). This transductive behaviour follows from
the non-IID nature of the model and the influence that test locations have on the posterior
distribution over mixture parameters. Consequently, the marginal predictive distribution at
a given location can depend on the other locations for which we are making simultaneous
predictions. This may or may not be desired. In some situations the ability to incorporate
the additional information about the input density at test time may be beneficial. However,
it is also straightforward to effectively ?ignore? this new information and simply compute a
set of independent single location predictions.
Given a set of test locations $\{x^*_{(t)}\}$, along with training data pairs $\{x_{(i)}, y_{(i)}\}$ and top-level
hyper-parameters $\Omega$, we iterate through the following conditional updates to produce our
predictive distribution for unknown outputs $\{y^*_{(t)}\}$. The parameter updates are all conjugate
with the prior distributions, except where noted:
1. Update indicators $\{z_{(i)}\}$ by cycling through the data and sampling one indicator
variable at a time. We use algorithm 8 from [9] with m = 1 to explore new
experts.
2. Update input space parameters.
3. Update GP hyper-params using Hybrid Monte Carlo [10].
4. Update gate hyperparameters. Note that $\beta_c$ is updated using slice sampling [11].
5. Update DP hyperparameter $\alpha_0$ using the data augmentation technique of Escobar
and West [12].
6. Resample missing output values by cycling through the experts, and jointly sampling the missing outputs associated with that GP.
We perform some preliminary runs to estimate the longest auto-covariance time, $\tau_{max}$, for
our posterior estimates, and then use a burn-in period that is about 10 times this timescale
before taking samples every $\tau_{max}$ iterations.² For our simulations the auto-covariance time
was typically 40 complete update cycles, so we use a burn-in period of 500 iterations and
collect samples every 50.
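The burn-in and thinning schedule is generic and can be wrapped around any sweep of the six updates above. In the sketch below, `one_sweep` and `snapshot` are placeholders of ours standing in for the full update cycle and for recording the sampled state; they are not names from the paper.

```python
def run_chain(one_sweep, snapshot, n_samples, burn_in=500, thin=50):
    """Burn in, then keep every thin-th state (about one autocovariance time)."""
    for _ in range(burn_in):
        one_sweep()
    samples = []
    for _ in range(n_samples):
        for _ in range(thin):
            one_sweep()
        samples.append(snapshot())
    return samples
```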
5  Experiments

5.1  Samples From The Prior
In figure 3 (A) we give an example of data drawn from our model which is multi-modal
and non-stationary. We also use this artificial dataset to confirm that our MCMC algorithm
performs well and is able to recover sensible posterior distributions. Posterior histograms for
some of the inferred parameters are shown in figure 3 (B) and we see that they are well
clustered around the "true" values.
² This is primarily for convenience. It would also be valid to use all the samples after the burn-in
period, and although they could not be considered independent, they could be used to obtain a more
accurate estimator.
Figure 3: (A) A set of samples from our model prior. The different marker styles are used
to indicate the sets of points from different experts. (B) The posterior distribution of $\log \alpha_0$
with its true value indicated by the dashed line (top) and the distribution of occupied experts
(bottom). We note that the posterior mass is located in the vicinity of the true values.
5.2  Inference On Toy Data
To illustrate some of the features of our model we constructed a toy dataset consisting of
4 continuous functions, to which we added different levels of noise. The functions used
were:
$$f_1(a_1) = 0.25 a_1^2 - 40, \qquad a_1 \in (0 \ldots 15), \qquad \text{Noise SD: } 7 \qquad (12)$$
$$f_2(a_2) = -0.0625 (a_2 - 18)^2 + 0.5 a_2 + 20, \qquad a_2 \in (35 \ldots 60), \qquad \text{Noise SD: } 7 \qquad (13)$$
$$f_3(a_3) = 0.008 (a_3 - 60)^3 - 70, \qquad a_3 \in (45 \ldots 80), \qquad \text{Noise SD: } 4 \qquad (14)$$
$$f_4(a_4) = -\sin(0.25 a_4) - 6, \qquad a_4 \in (80 \ldots 100), \qquad \text{Noise SD: } 2 \qquad (15)$$
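For reference, a dataset of this form can be regenerated as follows. Sampling the inputs uniformly over the stated ranges and using 100 points per function are our assumptions, since the text does not give the sampling scheme.

```python
import numpy as np

def make_toy_data(n_per=100, seed=0):
    rng = np.random.default_rng(seed)
    a1 = rng.uniform(0, 15, n_per)
    a2 = rng.uniform(35, 60, n_per)
    a3 = rng.uniform(45, 80, n_per)
    a4 = rng.uniform(80, 100, n_per)
    x = np.concatenate([a1, a2, a3, a4])
    y = np.concatenate([
        0.25 * a1**2 - 40 + rng.normal(0, 7, n_per),                       # (12)
        -0.0625 * (a2 - 18)**2 + 0.5 * a2 + 20 + rng.normal(0, 7, n_per),  # (13)
        0.008 * (a3 - 60)**3 - 70 + rng.normal(0, 4, n_per),               # (14)
        -np.sin(0.25 * a4) - 6 + rng.normal(0, 2, n_per),                  # (15)
    ])
    return x, y

x, y = make_toy_data()
```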
The resulting data has non-stationary noise levels, non-stationary covariance, discontinuities and significant multi-modality. Figure 4 shows our results on this dataset along with
those from a single GP for comparison.
We see that in order to account for the entire data set with a single GP, we are forced to infer
an unnecessarily high level of noise in the function. Also, a single GP is unable to capture
the multi-modality or non-stationarity of the data distribution. In contrast, our model seems
much more able to deal with these challenges.
Since we have a full generative model over both input and output space, we are also able
to use our model to infer likely input locations given a particular output value. There
are a number of applications for which this might be relevant, for example if one wanted
to sample candidate locations at which to evaluate a function we are trying to optimise.
We provide a simple illustration of this in figure 4 (B). We choose three output levels
and conditioned on the output having these values, we sample for the input location. The
inference seems plausible and our model is able to suggest locations in input space for a
maximal output value (+40) that was not seen in the training data.
5.3  Regression on a simple "real-world" dataset
We also apply our model and algorithm to the motorcycle dataset of [13]. This is a commonly used dataset in the GP community and therefore serves as a useful basis for comparison. In particular, it also makes it easy to see how our model compares with standard GPs
and with the work of [1]. Figure 5 compares the performance of our model with that of a
single GP. In particular, we note that although the median of our model closely resembles
the mean of the single GP, our model is able to more accurately model the low noise level
Figure 4: Results on a toy dataset. (A) The training data is shown along with the predictive
mean of a stationary covariance GP and the median of the predictive distribution of our
model. (B) The small dots are samples from the model (160 samples per location) evaluated
at 80 equally spaced locations across the range (but plotted with a small amount of jitter
to aid visualisation). These illustrate the predictive density from our model. The solid
lines show the $\pm 2$ SD interval from a regular GP. The circular markers at ordinates of 40,
10 and $-100$ show samples from "reverse-conditioning" where we sample likely abscissa
locations given the test ordinate and the set of training data.
on the left side of the dataset. For the remainder of the dataset, the noise level modeled by
our model and a single GP are very similar, although our model is better able to capture
the behaviour of the data at around 30 ms. It is difficult to make an exact comparison to
[1], however we can speculate that our model is more realistically modeling the noise at
the beginning of the dataset by not inferring an overly "flat" GP expert at that location. We
can also report that our expert adjacency matrix closely resembles that of [1].
6  Discussion
We have presented an alternative framework for an infinite mixture of GP experts. We feel
that our proposed model carries over the strengths of [1] and augments these with several desirable additional features. The pseudo-likelihood objective function used to adapt
the gating network defined in [1] is not guaranteed to lead to a self-consistent distribution
and therefore the results may depend on the order in which the updates are performed; our
model incorporates a consistent Bayesian density formulation for both input and output
spaces by definition. Furthermore, in our most general framework we are more naturally
able to specify priors over the partitioning of space between different expert components.
Also, since we have a full joint model we can infer inverse functional mappings.
There should be considerable gains to be made by allowing the input density models to be
more powerful. This would make it easier for arbitrary regions of space to share the same
covariance structures; at present the areas "controlled" by a particular expert tend to be
local. Consequently, a potentially undesirable aspect of the current model is that strong
clustering in input space can lead us to infer several expert components even if a single GP
would do a good job of modelling the data. An elegant way of extending the model in this
way might be to use a separate infinite mixture distribution for the input density of each
expert, perhaps incorporating a hierarchical DP prior across the infinite set of experts to
allow information to be shared.
With regard to applications, it might be interesting to further explore our model?s capability
to infer inverse functional mappings; perhaps this could be useful in an optimisation or
active learning context. Finally, we note that although we have focused on rather small
examples so far, it seems that the inference techniques should scale well to larger problems
100
Figure 5: (A) Motorcycle impact data together with the median of our model's point-wise
predictive distribution and the predictive mean of a stationary covariance GP model. (B)
The small dots are samples from our model (160 samples per location) evaluated at 80
equally spaced locations across the range (but plotted with a small amount of jitter to aid
visualisation). The solid lines show the $\pm 2$ SD interval from a regular GP.
and more practical tasks.
Acknowledgments
Thanks to Ben Marlin for sharing slice sampling code and to Carl Rasmussen for making
minimize.m available.
References
[1] C.E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In Advances
in Neural Information Processing Systems 14, pages 881–888. MIT Press, 2002.
[2] V. Tresp. Mixture of Gaussian processes. In Advances in Neural Information Processing Systems, volume 13. MIT Press, 2001.
[3] Z. Ghahramani and M. I. Jordan. Supervised learning from incomplete data via an EM approach. In Advances in Neural Information Processing Systems 6, pages 120?127. MorganKaufmann, 1995.
[4] L. Xu, M. I. Jordan, and G. E. Hinton. An alternative model for mixtures of experts. In Advances
in Neural Information Processing Systems 7, pages 633–640. MIT Press, 1995.
[5] C. E. Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information
Processing Systems, volume 12, pages 554–560. MIT Press, 2000.
[6] R.A. Jacobs, M.I. Jordan, and G.E. Hinton. Adaptive mixture of local experts. Neural Computation, 3, 1991.
[7] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman and
Hall, 2nd edition, 2004.
[8] D. Blackwell and J. B. MacQueen. Ferguson distributions via Polya urn schemes. The Annals
of Statistics, 1(2):353–355, 1973.
[9] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of
Computational and Graphical Statistics, 9:249–265, 2000.
[10] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report
CRG-TR-93-1, University of Toronto, 1993.
[11] R. M. Neal. Slice sampling (with discussion). Annals of Statistics, 31:705–767, 2003.
[12] M. Escobar and M. West. Computing Bayesian nonparametric hierarchical models. In Practical Nonparametric and Semiparametric Bayesian Statistics, number 133 in Lecture Notes in
Statistics. Springer-Verlag, 1998.
[13] B. W. Silverman. Some aspects of the spline smoothing approach to non-parametric regression
curve fitting. J. Royal Statist. Society, Ser. B, 47:1–52, 1985.
1,947 | 2,769 | Fast biped walking with a reflexive controller
and real-time policy searching
Tao Geng¹, Bernd Porr² and Florentin Wörgötter¹,³
¹ Dept. Psychology, University of Stirling, UK. [email protected]
² Dept. Electronics & Electrical Eng., University of Glasgow, UK. [email protected]
³ Bernstein Centre for Computational Neuroscience, University of Göttingen. [email protected]
Abstract
In this paper, we present our design and experiments of a planar biped
robot ("RunBot") under pure reflexive neuronal control. The goal of this
study is to combine neuronal mechanisms with biomechanics to obtain
very fast speed and the on-line learning of circuit parameters. Our controller is built with biologically inspired sensor- and motor-neuron models, including local reflexes and not employing any kind of position or
trajectory-tracking control algorithm. Instead, this reflexive controller
allows RunBot to exploit its own natural dynamics during critical stages
of its walking gait cycle. To our knowledge, this is the first time that dynamic biped walking is achieved using only a pure reflexive controller.
In addition, this structure allows using a policy gradient reinforcement
learning algorithm to tune the parameters of the reflexive controller in
real-time during walking. This way RunBot can reach a relative speed of
3.5 leg-lengths per second after a few minutes of online learning, which
is faster than that of any other biped robot, and is also comparable to the
fastest relative speed of human walking. In addition, the stability domain
of stable walking is quite large, supporting this design strategy.
1
Introduction
Building and controlling fast biped robots demands a deeper understanding of biped walking than for slow robots. While slow robots may walk statically, fast biped walking has to
be dynamically balanced and more robust as less time is available to recover from disturbances [1]. Although many biped robots have been developed using various technologies
in the past 20 years, their walking speeds are still not comparable to that of their counterpart in nature, humans. Most of the successful biped robots have commonly used the ZMP
(Zero Moment Point, [2]) as the criterion for stability control and motion generation. The
ZMP is the point on the ground where the total moment generated by gravity and inertia
equals zero. This measure has two deficiencies in the case of high-speed walking. First,
the ZMP must always reside in the convex hull of the stance foot, and the stability margin
is measured by the minimal distance between the ZMP and the edge of the foot. To ensure
an appropriate stability margin, the foot has to be flat and large, which will deteriorate the
robot's performance and pose great difficulty during fast walking. This difficulty can be
shown clearly when humans try to walk with skis or swimming fins. Second, the ZMP
criterion does not permit rotation of the stance foot at the heel or the toe, which, however,
can amount to up to eighty percent of a normal human walking gait, and is important and
inevitable in fast biped walking.
On the other hand, sometimes dynamic biped walking can be achieved without considering
any stability criterion such as the ZMP. For example, passive biped robots can walk down
a shallow slope without sensing or control. Some researchers have proposed approaches to
equip a passive biped with actuators to improve its performance and drive it to walk on the
flat ground [3] [4]. Nevertheless, these passive bipeds excessively depend on their natural
dynamics for gait generation, which, while making their gaits efficient in energy, also limits
their walking rate to be very slow.
In this study, we will show that, with a properly designed mechanical structure, a novel,
pure reflexive controller, and an online policy gradient reinforcement learning algorithm,
our biped robot can attain a fast walking speed of 3.5 leg-lengths per second. This makes it
faster than any other biped robot we know. Though not a passive biped, it exploits its own
natural dynamics during some stages of its walking gait, greatly simplifying the necessary
control structures.
2
The robot
RunBot (Fig. 1) is 23 cm high, foot to hip joint axis. It has four joints: left hip, right hip,
left knee, right knee. Each joint is driven by a modified RC servo motor. A hard mechanical
stop is installed on the knee joints, preventing them from going into hyperextension. Each foot
is equipped with a modified piezo transducer to sense ground contact events. Similar to
other approaches [1], we constrain the robot only in the sagittal plane by a boom of one
meter length freely rotating in its joints (planar robot). This assures that RunBot can still
very easily trip and fall in the sagittal plane.
Figure 1: A): The robot, RunBot, and its boom structure. All three orthogonal axes of
the boom can rotate freely. B) Illustration of a walking step of RunBot. C) A series of
sequential frames of a walking gait cycle. The interval between every two adjacent frames
is 33 ms. Note that, during the time between frame (8) and frame (13), which is nearly one
third of the duration of a step, the motor voltage of all four joints remain to be zero, and the
whole robot is moving passively. At the time of frame (13), the swing leg touches the floor
and a next step begins.
Since we intended to exploit RunBot's natural dynamics during some stages of its gait
cycle, similar to passive bipeds, its foot bottom is also curved with a radius equal to half the
leg-length (with a too large radius, the tip of the foot may strike the ground during its swing
phase). During the stance phase of such a curved foot, always only one point touches the
ground, thus allowing the robot to roll passively around the contact point, which is similar
to the rolling action of human feet facilitating fast walking.
The most important consideration in the mechanical design of our robot is the location
of its center of mass. About seventy percent of the robot's weight is concentrated on its
trunk. The parts of the trunk are assembled in such a way that its center of mass is located
before the hip axis (Fig. 1 A). The effect of this design is illustrated in Fig. 1 B. As shown,
one walking step includes two stages, the first from (1) to (2), the second from (2) to (3).
During the first stage, the robot has to use its own momentum to rise up on the stance leg.
When walking at a low speed, the robot may not have enough momentum to do this. So,
the distance the center of mass has to cover in this stage should be as short as possible,
which can be fulfilled by locating the center of mass of the trunk forward. In the second
stage, the robot just falls forward naturally and catches itself on the next stance leg. Then
the walking cycle is repeated. The figure also shows clearly the rolling movement of the
curved foot of the stance leg. A stance phase begins with the heel touching ground, and
terminates with the toe leaving ground.
In summary, our mechanical design of RunBot has the following special features that distinguish it from other powered biped robots and facilitate high-speed walking and exploitation of natural dynamics: (a) Small curved feet allowing for rolling action; (b) Unactuated,
hence, light ankles; (c) Light-weight structure; (d) Light and fast motors; (e) Proper mass
distribution of the limbs; (f) Properly positioned mass center of the trunk.
3
The neural structure of our reflexive controller
The reflexive walking controller of RunBot follows a hierarchical structure (Fig. 2). The
bottom level is the reflex circuit local to the joints, including motor-neurons and angle
sensor neurons involved in the joint reflexes. The top level is a distributed neural network
consisting of hip stretch receptors and ground contact sensor neurons, which modulate the
local reflexes of the bottom level. Neurons are modelled as non-spiking neurons simulated
on a Linux PC, and communicated to the robot via the DA/AD board. Though somewhat
simplified, they still retain some of the prominent neuronal characteristics.
3.1
Model neuron circuit of the top level
The joint coordination mechanism in the top level is implemented with the neuron circuit
illustrated in Fig. 2. While other biologically inspired locomotive models and robots use
two stretch receptors on each leg to signal the attaining of the leg's AEP (Anterior Extreme
Position) and PEP (Posterior Extreme Position) respectively, our robot has only one stretch
receptor on each leg to signal the AEA (Anterior Extreme Angle) of its hip joint. Furthermore, the function of the stretch receptor on our robot is only to trigger the extensor reflex
on the knee joint of the same leg, rather than to implicitly reset the phase relations between
different legs as in the case of Cruse's model. As the hip joint approaches the AEA, the
output of the stretch receptors for the left (AL) and the right hip (AR) are increased as:
    u_AL = (1 + e^{ρ_AL(θ_AL - θ)})^{-1}        (1)

    u_AR = (1 + e^{ρ_AR(θ_AR - θ)})^{-1}        (2)
where θ is the real-time angular position of the hip joint, θ_AL and θ_AR are the hip anterior
extreme angles whose values are tuned by hand, and ρ_AL and ρ_AR are positive constants. This
Figure 2: The neuron model of the reflexive controller on RunBot.
model is inspired by a sensor neuron model presented in [5] that is thought capable of
emulating the response characteristics of populations of sensor neurons in animals. Another
kind of sensor neuron incorporated in the top level is the ground contact sensor neuron,
which is active when the foot is in contact with the ground. Its output, similar to that of the
stretch receptors, changes according to:
    u_GL = (1 + e^{ρ_GL(θ_GL - V_L + V_R)})^{-1}        (3)

    u_GR = (1 + e^{ρ_GR(θ_GR - V_R + V_L)})^{-1}        (4)
where V_L and V_R are the output voltage signals from the piezo sensors of the left and right
foot respectively, θ_GL and θ_GR work as thresholds, and ρ_GL and ρ_GR are positive constants.
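For illustration, here is a minimal Python sketch of these sigmoidal sensor models (following eqs. (1)-(4) as reconstructed above; the default slope and threshold values are taken from Table 2 below and are assumptions of this sketch, not part of the robot code):

```python
import numpy as np

def stretch_receptor(theta, theta_A, rho=2.0):
    # Hip stretch receptor, eqs. (1)-(2): output rises as the hip
    # angle theta approaches the anterior extreme angle theta_A.
    return 1.0 / (1.0 + np.exp(rho * (theta_A - theta)))

def ground_contact(v_own, v_other, theta_G=2.0, rho=2.0):
    # Ground contact sensor neuron, eqs. (3)-(4): excited by the piezo
    # voltage of its own foot, inhibited by that of the opposite foot.
    return 1.0 / (1.0 + np.exp(rho * (theta_G - v_own + v_other)))
```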
3.2
Neural circuit of the bottom level
The bottom-level reflex system of our robot consists of reflexes local to each joint (Fig. 2).
The neuron module for one reflex is composed of one angle sensor neuron and the motor-neuron it contacts. Each joint is equipped with two reflexes, an extensor reflex and a flexor
reflex, both modelled as monosynaptic reflexes; that is, whenever its threshold is exceeded, the angle sensor neuron directly excites the corresponding motor-neuron. This
direct connection between angle sensor neuron and motor-neuron is inspired by a reflex
described in cockroach locomotion [6]. In addition, the motor-neurons of the local reflexes
also receive an excitatory synapse and an inhibitory synapse from the neurons of the top
level, by which the top level can modulate the bottom-level reflexes. Each joint has two
angle sensor neurons, one for the extensor reflex, and the other for the flexor reflex (Fig. 2).
Their models are similar to that of the stretch receptors described above. The extensor
angle sensor neuron changes its output according to:
    u_ES = (1 + e^{ρ_ES(θ_ES - θ)})^{-1}        (5)
where θ is the real-time angular position obtained from the potentiometer of the joint, θ_ES
is the threshold of the extensor reflex, and ρ_ES a positive constant. Likewise, the output of
Table 1: Parameters of neurons for hip- and knee joints. For meaning of the subscripts, see
Fig. 2.

                θ_EM   θ_FM   ρ_ES   ρ_FS
  Hip Joints      5      5      2      2
  Knee Joints     5      5      2      2
Table 2: Parameters of stretch receptors and ground contact sensor neurons.

  θ_GL (V)   θ_GR (V)   θ_AL (deg)   θ_AR (deg)   ρ_GL   ρ_GR   ρ_AL   ρ_AR
     2          2         = θ_ES       = θ_ES       2      2      2      2
the flexor sensor neuron is modelled as:
    u_FS = (1 + e^{ρ_FS(θ - θ_FS)})^{-1}        (6)
with θ_FS and ρ_FS defined similarly to the above. The direction of the extensor on both hip and knee joints
is forward while that of the flexors is backward.
It should be particularly noted that the thresholds of the sensor neurons in the reflex modules do not work as desired positions for joint control, because our reflexive controller does
not involve any exact position control algorithms that would ensure that the joint positions
converge to a desired value. The motor-neuron model is adapted from one used in the neural controller of a hexapod simulating insect locomotion [7]. The state and output of each
extensor motor-neuron are governed by equations (7) and (8) [8] (those of flexor motor-neurons are
similar):

    τ (dy/dt) = -y + Σ_X ω_X u_X        (7)

    u_EM = (1 + e^{θ_EM - y})^{-1}        (8)
where y represents the mean membrane potential of the neuron. Equation (8) is a sigmoidal
function that can be interpreted as the neuron's short-term average firing frequency, and θ_EM
is a bias constant that controls the firing threshold. τ is a time constant associated with the
passive properties of the cell membrane [8], ω_X represents the connection strength from
the sensor neurons and stretch receptors to the motor-neuron (Fig. 2), and u_X represents
the output of the sensor neurons and stretch receptors that contact this motor-neuron (e.g.,
u_ES, u_AL, u_GL, etc.).
Note that, on RunBot, the output value of the motor-neurons, after multiplication by a gain
coefficient, is sent to the servo amplifier to directly drive the joint motor. The voltage of
the joint motor is determined by

    Motor Voltage = M_AMP · G_M · (s_EM u_EM + s_FM u_FM),        (9)

where M_AMP represents the magnitude of the servo amplifier, which is 3 on RunBot, G_M
stands for the output gain of the motor-neurons, s_EM and s_FM are signs for the motor voltage
of the extensor and flexor, being +1 or -1 depending on the hardware of the robot, and u_EM
and u_FM are the outputs of the motor-neurons.
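A minimal sketch of how equations (7)-(9) could be iterated in software (Euler integration; the integration step and the helper names are assumptions, not taken from the paper):

```python
import numpy as np

def motor_neuron_step(y, u_inputs, weights, dt=0.001, tau=0.003, theta_M=5.0):
    # One Euler step of the motor-neuron state, eq. (7), and its
    # sigmoidal output, eq. (8). u_inputs are the outputs u_X of the
    # sensor neurons and stretch receptors; weights are the strengths w_X.
    y = y + dt * (-y + np.dot(weights, u_inputs)) / tau
    u = 1.0 / (1.0 + np.exp(theta_M - y))
    return y, u

def motor_voltage(u_em, u_fm, g_m, m_amp=3.0, s_em=+1.0, s_fm=-1.0):
    # Joint motor voltage, eq. (9); the signs s_EM and s_FM depend on
    # the hardware of the robot.
    return m_amp * g_m * (s_em * u_em + s_fm * u_fm)
```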
4
Robot walking experiments
The model neuron parameters chosen jointly for all experiments are listed in Tables 1 and
2. The time constants τ_i of all neurons take the same value of 3 ms. The weights of all
Table 3: Fixed parameters of the knee joints.

                θ_ES,k (deg)   θ_FS,k (deg)   G_M,k
  Knee Joints       175            110        0.9 G_M,h
the inhibitory connections are set to -10, except those between sensor-neurons and motor-neurons, which are -30, and those between stretch receptors and flexor motor-neurons,
which are -15. The weights of all excitatory connections are 10, except those between
stretch receptors and extensor motor-neurons, which are 15. Because the movements of the
knee joints are needed mainly for timely ground clearance without big contributions to the
walking speed, we set their neuron parameters to fixed values (see Table 3). We also fix the
described below, we only need to tune the two parameters of the hip joints, the threshold
of the extensor sensor neurons (?ES,h ) and the gain of the motor-neurons (GM,h ), which
work together to determine the walking speed and the important gait properties of RunBot.
In RunBot, ?ES,h determines roughly the stride length (not exactly, because the hip joint
moves passively after passing ?ES,h ), while GM,h is proportional to the angular velocity
of the motor on the hip joint.
In experiments of walking on a flat floor, surprisingly, we have found that stable gaits can
appear in a considerably large range of the parameters ?ES,h and GM,h (Fig. 3A).
Figure 3: (A), The range of the two parameters, GM,h and ?ES,h , in which stable gaits
appear. The maximum permitted value of GM,h is 3.5 (higher value will destroy the motor
of the hip joint). See text for more information. (B), Phase diagrams of hip joint position
and knee joint position of one leg during the whole learning process. The smallest orbit is
the fastest walking gait. (C), The walking speed of RunBot during the learning process.
In RunBot, passive movements appear on two levels, at the single joint level and at the
whole robot level. Due to the high gear ratio of the joint motors, the passive movement of
each joint is not very large. Whereas the effects of passive movements at the whole robot
level can be clearly seen especially when RunBot is walking at a medium or slow speed
(Fig. 1 C).
4.1
Policy gradient searching for fast walking gaits
In order to get a fast walking speed, the biped robot should have a long stride length, a short
swing time, and a short double support phase [1]. In RunBot, because the phase-switching
of its legs is triggered immediately by ground contact signals, its double support phase is so
short (usually less than 30 ms) that it is negligible. A long stride length and a short swing
time are mutually exclusive. Because there are no position or trajectory tracking control
in RunBot, it is impossible to control its walking speed directly or explicitly. However,
knowing that RunBot's walking gait is determined by only two parameters, θ_ES,h and G_M,h
(Fig. 3A), we formulate RunBot's fast walking control as a policy gradient reinforcement
learning problem by considering each point in the parameter space (Fig. 3A) as an
open-loop policy that can be executed by RunBot in real-time.
Our approach is modified from [9]. It starts from an initial parameter vector θ = (θ_1, θ_2)
(here θ_1 and θ_2 represent G_M,h and θ_ES,h, respectively) and proceeds to evaluate the following 5 policies near θ: (θ_1, θ_2), (θ_1, θ_2 + ε_2), (θ_1 - ε_1, θ_2), (θ_1, θ_2 - ε_2), (θ_1 + ε_1, θ_2), where
each ε_j is an adaptive value that is small relative to θ_j. The evaluation of each policy generates a score that is a measure of the speed of the gait described by that policy. We use these
scores to construct an adjustment vector A [9]. Then A is normalized and multiplied by
an adaptive step-size. Finally, we add A to θ, and begin the next iteration. If A = 0, this
means a possible local minimum is encountered. In this case, we replace A with a stochastically generated vector. Although this is a very simple strategy, our experiments show that
it can effectively prevent the real-time learning from getting trapped in local minima.
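The following Python sketch illustrates one iteration of this search. The paper does not spell out how the adjustment vector A is built from the five scores; the plus/minus-difference construction below follows Kohl and Stone [9] and should be read as an assumption, as should the explicit step-size argument:

```python
import numpy as np

def policy_gradient_step(theta, eps, score, step_size, rng):
    # theta = (G_Mh, theta_ESh); score(theta) walks RunBot with these
    # parameters and returns a speed measure; eps = (eps1, eps2) holds
    # the adaptive perturbations.
    s_center = score(theta)                  # centre policy, for monitoring
    e1 = np.array([eps[0], 0.0])
    e2 = np.array([0.0, eps[1]])
    a = np.array([score(theta + e1) - score(theta - e1),
                  score(theta + e2) - score(theta - e2)])
    if np.allclose(a, 0.0):                  # possible local minimum:
        a = rng.standard_normal(2)           # replace A stochastically
    a = a / np.linalg.norm(a)                # normalize A, then scale it
    return theta + step_size * a, s_center
```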
One experiment result is shown in Fig. 3. RunBot starts its walking with the parameters
corresponding to point S in Fig. 3A whose speed is 41 cm/s (see Fig. 3C). After 240 seconds
of continuous walking with the learning algorithm and no human intervention, RunBot
attains a walking speed of about 80 cm/s (see Fig. 3C, corresponding to point F in Fig. 3A),
which is equivalent to 3.5 leg-lengths per second. To compare the walking speed of various
biped robots whose sizes are quite different from each other, we use the relative speed,
speed divided by the leg-length. We know of no other biped robot attaining such a fast
relative speed. The world record of human race walking is equivalent to about 4.0-4.5 leg-lengths per second. So, RunBot's highest walking speed is comparable to that of humans.
To get a feeling of how fast RunBot can walk, we strongly encourage readers to watch the
videos of the experiment at http://www.cn.stir.ac.uk/~tgeng/nips
Although there is no specifically designed controller in charge of the sensing and control
of the transient stages of policy changing (speed changing), the natural dynamics of the
robot itself ensures the stability during the changes. By exploiting the natural dynamics,
the reflexive controller is robust to its parameter variation as shown in Fig. 3A.
5
Discussions
Cruse developed a completely decentralized reflexive controller model to understand the
locomotion control of walking in stick insects (Carausius morosus, [10]), which can immensely decrease the computational burden of the locomotion controller, and has been
applied in many hexapod robots. To date, however, no real biped robot has existed that
depends exclusively on reflexive controllers. This may be because of the intrinsic instability specific to biped-walking, which makes the dynamic stability of biped robots much
more difficult to control than that of multi-legged robots. To our knowledge, our RunBot
is the first dynamic biped exclusively controlled by a pure reflexive controller. Although
such a pure reflexive controller itself involves no explicit mechanisms for the global stability control of the biped, its coupling with the properly designed mechanics of RunBot has
substantially ensured the considerably large stable domain of the dynamic biped gaits.
Our reflexive controller has some evident differences from Cruse's model. Cruse's model
depends on PEP, AEP and GC (Ground Contact) signals to generate the movement pattern
of the individual legs, whereas our reflexive controller presented here uses only GC and
AEA signals to coordinate the movements of the joints. Moreover, the AEA signal of one
hip in RunBot only acts on the knee joint belonging to the same leg, not functioning on
the leg-level as the AEP and PEP did in Cruse's model. The use of fewer phasic feedback
signals has further simplified the controller structure in RunBot.
In order to achieve a real-time walking gait in the real world, even biologically inspired robots
often have to depend on some kind of position- or trajectory-tracking control on their
joints [6, 11, 12]. However, in RunBot, there is no exact position control implemented.
The neural structure of our reflexive controller does not depend on, or ensure the tracking
of, any desired position. Indeed, it is this approximate nature of our reflexive controller
that allows the physical properties of the robot itself to contribute implicitly to the generation
of overall gait trajectories. The effectiveness of this hybrid neuro-mechanical system is also
reflected in the fact that real-time learning of parameters was possible, where sometimes
the speed of the robot changes quite strongly (see movie) without tripping it.
References
[1] J. Pratt. Exploiting Inherent Robustness and Natural Dynamics in the Control of
Bipedal Walking Robots. PhD thesis, Massachusetts Institute of Technology, 2000.
[2] M. Vukobratovic, B. Borovac, D. Surla, and D. Stokic. Biped locomotion: dynamics,
stability, control and application. Springer-Verlag, 1990.
[3] R. Q. van der Linde. Active leg compliance for passive walking. In Proceedings of
IEEE International Conference on Robotics and Automation, Orlando, Florida, 1998.
[4] Steve Collins and Andy Ruina. Efficient bipedal robots based on passive-dynamic
walkers. Science, 307:1082-1085, 2005.
[5] T. Wadden and O. Ekeberg. A neuro-mechanical model of legged locomotion: Single
leg control. Biological Cybernetics, 79:161-173, 1998.
[6] R.D. Beer, R.D. Quinn, H.J. Chiel, and R.E. Ritzmann. Biologically inspired approaches to robotics. Communications of the ACM, 40(3):30-38, 1997.
[7] R.D. Beer and H.J. Chiel. A distributed neural network for hexapod robot locomotion.
Neural Computation, 4:356-365, 1992.
[8] J.C. Gallagher, R.D. Beer, K.S. Espenschied, and R.D. Quinn. Application of evolved
locomotion controllers to a hexapod robot. Robotics and Autonomous Systems, 19:95-103, 1996.
[9] Nate Kohl and Peter Stone. Policy gradient reinforcement learning for fast
quadrupedal locomotion. In Proceedings of the IEEE International Conference on
Robotics and Automation, volume 3, pages 2619-2624, May 2004.
[10] H. Cruse, T. Kindermann, M. Schumm, et al. Walknet - a biologically inspired
network to control six-legged walking. Neural Networks, 11(7-8):1435-1447, 1998.
[11] Y. Fukuoka, H. Kimura, and A.H. Cohen. Adaptive dynamic walking of a quadruped
robot on irregular terrain based on biological concepts. Int. J. of Robotics Research,
22:187-202, 2003.
[12] M.A. Lewis. Certain principles of biomorphic robots. Autonomous Robots, 11:221-226, 2001.
1,948 | 277 |
Complexity of Finite Precision
Neural Network Classifier
Amir Dembo 1
Inform. Systems Lab.
Stanford University
Stanford, Calif. 94305
Kai-Yeung Siu
Inform. Systems Lab.
Stanford University
Stanford, Calif. 94305
Thomas Kailath
Inform. Systems Lab.
Stanford University
Stanford, Calif. 94305
ABSTRACT
A rigorous analysis of the finite precision computational aspects of
a neural network as a pattern classifier via a probabilistic approach
is presented. Even though there exist negative results on the capability of the perceptron, we show the following positive results: Given
n pattern vectors each represented by cn bits where c > 1, that are
uniformly distributed, with high probability the perceptron can
perform all possible binary classifications of the patterns. Moreover, the resulting neural network requires a vanishingly small proportion O(log n/n) of the memory that would be required for complete storage of the patterns. Further, the perceptron algorithm
takes O(n^2) arithmetic operations with high probability, whereas
other methods such as linear programming take O(n^3.5) in the
worst case. We also indicate some mathematical connections with
VLSI circuit testing and the theory of random matrices.
1
Introduction
It is well known that the perceptron algorithm can be used to find the appropriate
parameters in a linear threshold device for pattern classification, provided the pattern vectors are linearly separable. Since the number of parameters in a perceptron
is significantly fewer than that needed to store the whole data set, it is tempting to
1 The coauthor is now with the Mathematics and Statistics Department of Stanford University.
conclude that when the patterns are linearly separable, the perceptron can achieve
a reduction in storage complexity. However, Minsky and Papert [1] have shown
an example in which both the learning time and the parameters increase exponentially, when the perceptron would need much more storage than does the whole list
of patterns.
Ways around such examples can be explored by noting that analysis that assumes
real arithmetic and disregards finite precision aspects might yield misleading results.
For example, we present below a simple network with one real valued weight that
can simulate all possible classifications of n real valued patterns into k classes,
when unlimited accuracy and continuous distribution of the patterns are assumed.
For simplicity, let us assume the patterns are real numbers in [0,1]. Consider the
following sequence {x_{i,j}} generated by each pattern x_i for i = 1, ..., n:

    x_{i,1} = k · x_i mod k
    x_{i,j} = k · x_{i,j-1} mod k   for j > 1
    u(x_i, j) = [x_{i,j}]

where [·] denotes the integer part.
Let f: {x_1, ..., x_n} → {0, ..., k-1} denote the desired classification of the patterns.
It is easy to see that for any continuous distribution on [0,1], there exists a j such
that u(x_i, j) = f(x_i), with probability one. So, the network y = u(x, w) may
simulate any classification with w = j determined from the desired classification as
shown above.
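A sketch of this construction in Python (assuming exact real arithmetic; finite-precision floating point would of course break it, which is precisely the point of this paper):

```python
def u(x, j, k=2):
    # Return the j-th base-k digit of x in [0, 1] via the recursion
    # x <- k * x mod k, then take the integer part.
    for _ in range(j):
        x = (k * x) % k
    return int(x)

# With w = j, the one-parameter network y = u(x, w) reproduces any
# desired labeling: for patterns drawn from a continuous distribution,
# some digit position j realizes the target classification w.p. one.
```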
So in this paper, we emphasize the finite precision computational aspects of pattern
classification problems and provide partial answers to the following questions:
- Can the perceptron be used as an efficient form of memory?
- Does the 'learning' time of the perceptron become too long to be practical most of
the time even when the patterns are assumed to be linearly separable?
- How do the convergence results compare to those obtained by solving a system
of linear inequalities?
We attempt to answer the above questions by using a probabilistic approach. The
theorems will be presented without proofs; details of the proof will appear in a
complete paper. In the following analysis, the phrase 'with high probability' means
the probability of the underlying event goes to 1 as the number of patterns goes to
infinity. First, we shall introduce the classical model of a perceptron in more detail
and give some known results on its limitation as a pattern classifier.
2
The Perceptron
A perceptron is a linear threshold device which computes a linear combination of
the coordinates of the pattern vector, compares the value with a threshold and
outputs +1 or -1 if the value is larger or smaller than the threshold respectively.
More formally, we have
Output: sign{⟨w, x⟩ - θ} = sign{ Σ_{i=1}^{d} x_i · w_i - θ }

Input: x = (x_1, ..., x_d) ∈ R^d

Parameters: weights w ∈ R^d, threshold θ ∈ R

where

    sign{y} = +1 if y ≥ 0, and -1 otherwise.
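In code, the device computes (a minimal sketch):

```python
import numpy as np

def perceptron_output(w, theta, x):
    # Linear threshold unit: sign{<w, x> - theta}, with sign{0} = +1.
    return 1 if np.dot(w, x) - theta >= 0 else -1
```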
Given m patterns x_1, ..., x_m in R^d, there are 2^m possible ways of classifying each
of the patterns to ±1. When a desired classification of the patterns is achievable
by a perceptron, the patterns are said to be linearly separable. Rosenblatt (1962)
[2] showed that if the patterns are linearly separable, then there is a 'learning'
algorithm, which he called the perceptron learning algorithm, to find the appropriate parameters w and θ. Let σ_i = ±1 be the desired classification of the pattern x_i. Also,
let y_i = σ_i · x_i. The perceptron learning algorithm runs as follows:
1. Set k = 1, choose an initial value of w(k) ≠ 0.
2. Select an i ∈ {1, ..., n}, set y(k) = y_i.
3. If w(k) · y(k) > 0, goto 2. Else
4. Set w(k + 1) = w(k) + y(k), k = k + 1, go to 2.
The algorithm terminates when step 3 is true for all y_i. If the patterns are linearly separable, then the above perceptron algorithm is guaranteed to converge in
finitely many iterations, i.e. Step 4 would be reached only finitely often.
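A direct transcription of steps 1-4 (a sketch; the rows of Y are the corrected patterns y_i = σ_i x_i, with the threshold absorbed as an extra coordinate, and the epoch cap is an added safeguard):

```python
import numpy as np

def perceptron_train(Y, max_epochs=10000):
    # Y: n x d array of corrected patterns. Returns w with w . y_i > 0
    # for all i; when the patterns are linearly separable, Lemma 2 of
    # this paper bounds the number of updates by (N / delta)^2.
    w = Y[0].astype(float)                   # some nonzero initial value
    for _ in range(max_epochs):
        updated = False
        for y in Y:
            if np.dot(w, y) <= 0:            # step 3 fails: apply step 4
                w = w + y
                updated = True
        if not updated:
            return w                         # step 3 holds for all y_i
    return None                              # gave up: likely not separable
```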
The existence of such a simple and elegant 'learning' algorithm brought a great
deal of interest during the 60's. However, the capability of the perceptron is very
limited since only a small portion of the 2^m possible binary classifications can be
achieved. In fact, Cover (1965) [3] has shown that a perceptron can at most classify
the patterns into

    2 · Σ_{i=0}^{d-1} (m-1 choose i) = O(m^{d-1})

different ways out of the 2^m possibilities.
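A small helper to evaluate this count (illustrative only):

```python
from math import comb

def cover_count(m, d):
    # Cover's bound: 2 * sum_{i=0}^{d-1} C(m-1, i) classifications of
    # m patterns in general position by a perceptron with d inputs.
    return 2 * sum(comb(m - 1, i) for i in range(d))

# e.g. cover_count(4, 2) == 8 while 2**4 == 16, so the XOR-type
# labelings of four points in the plane are unrealizable.
```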
The above upper bound O(m^{d-1}) is achieved when the pattern vectors are in general
position, i.e. every subset of d vectors in {x_1, ..., x_m} is linearly independent. An
immediate generalization of this result is the following:
Theorem 1 For any function f(w, x) which lies in a function space of dimension
r, i.e. if we can write

    f(w, x) = a_1(w) f_1(x) + ... + a_r(w) f_r(x),

then the number of possible classifications of m patterns by sign{f(w, x)} is bounded
by O(m^{r-1}).

3
A New Look at the Perceptron
The reason why perceptron is so limited in its capability as a pattern classifier is
that the dimension of the pattern vector space is kept fixed while the number of
patterns is increased. We consider the binary expansion of each coordinate and view
the real pattern vector as a binary vector, but in a much higher dimensional space.
The intuition behind this is that we are now making use of every bit of information
in the pattern. Let us assume that each pattern vector has dimension d and that
each coordinate is given with m bits of accuracy, which grows with the number of
patterns n in such a way that d · m = c · n for some c > 1. By considering the binary
expansion, we can treat the patterns as binary vectors, i.e. each vector belongs to
{+1, -1}^{cn}. If we want to classify the patterns into k classes, we can use log k
binary classifiers, each classifying the patterns into the corresponding bit
of the binary encoding of the k classes. So without loss of generality, we assume
that the number of classes equals 2. Now the classification problem can be viewed
as an implementation of a partial Boolean function whose value is only specified on
n inputs out of the 2^{cn} possible ones. For arbitrary input patterns, there does not
seem to exist an efficient way other than complete storage of the patterns and the
use of a look-up table for classification, which will require O(n^2) bits. It is natural
to ask if this is the best we can do. Surprisingly, using probabilistic methods in
combinatorics [4] (counting arguments), we can show the following:
Theorem 2 For n sufficiently large, there exists a system that can simulate all
possible binary classifications with parameter storage of n + 2 log n bits.
Moreover, a recent result from the theory of VLSI testing [5] implies that at least
n + log n bits are needed. As the proof of Theorem 2 is non-constructive, both
the learning of the parameters and the retrieval of the desired classification in the
'optimal' system may be too complex for any practical purpose. Besides, since
there is almost no redundancy in the storage of parameters in such an 'optimal'
system, there will be no 'generalization' properties, i.e., it is difficult to predict
what the output of the system would be on patterns that are not trained. However,
a perceptron classifier, while sub-optimal in terms of Theorem 3 below, requires
only O(n log n) bits for parameter storage, compared with O(n^2) bits for a table
look-up classifier. In addition, it will exhibit 'generalization' properties in the sense
that new patterns that are close in Hamming distance to those trained patterns are
likely to be classified into the same class. So, if we allow some vanishingly small
probability of error, we can give an affirmative answer to the first question raised
at the beginning:
Theorem 3 Assume the n pattern vectors are uniformly distributed over {+1, -1}^{cn};
then with high probability, the patterns can be classified in all 2^n possible ways using the perceptron algorithm. Further, the storage of parameters requires only O(n log n)
bits.
In other words, when the input patterns are given with high precision, perceptron
can be used as an efficient form of memory.
The known upper bound on the learning time of the perceptron depends on the maximum length of the input pattern vectors, and the minimum distance δ of the
pattern vectors to a separating hyperplane. In the following analysis, our probabilistic assumption guarantees the pattern vectors to be linearly independent with
high probability and thus linearly separable. In order to give a probabilistic upper
bound on the learning time of the perceptron, we first give a lower bound on the
minimum distance δ with high probability:
Lemma 1 Let n be the number of pattern vectors, each in R^m, where m = (1 + ε)n
and ε is any constant > 0. Assume the entries of each vector v are iid random
variables with zero mean and bounded second moment. Then with probability → 1
as n → ∞, there exists a separating hyperplane and a δ* > 0 such that each vector
is at a distance of at least δ* from it.
In our case, each coordinate of the patterns is assumed to be equally likely ±1
and clearly the conditions in the above lemma are satisfied. In general, when the
dimension of the pattern vectors is larger than and increases linearly with the number of patterns, the above theorem applies provided the patterns are given with
high enough precision that a continuous distribution is a sufficiently good model for
analysis.
The above lemma makes use of a famous conjecture from the theory of random
matrices [6] which gives a lower bound on the minimum singular value of a random
matrix. We actually proved the conjecture during our course of study, which states
that the minimum singular value of a cn × n random matrix with
c > 1 grows as √n almost surely.
Theorem 4 Let A_n be a cn × n random matrix with c > 1, whose entries are i.i.d.
with zero mean and bounded second moment, and let σ(·) denote the minimum singular value of a matrix. Then there exists β > 0 such that

    lim inf_{n→∞} σ(A_n)/√n > β

with probability 1.
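The statement is easy to probe numerically; the following sketch (an illustration, not part of the paper) estimates σ_min(A_n)/√n for random sign matrices:

```python
import numpy as np

def min_singular_ratio(n, c=2.0, trials=10, seed=0):
    # Average sigma_min(A_n) / sqrt(n) over random cn x n matrices with
    # iid +/-1 entries; by Theorem 4 it stays bounded away from zero.
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(trials):
        A = rng.choice([-1.0, 1.0], size=(int(c * n), n))
        ratios.append(np.linalg.svd(A, compute_uv=False)[-1] / np.sqrt(n))
    return float(np.mean(ratios))
```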
Note that our probabilistic assumption on the patterns includes a wide class of distributions, in particular the zero mean normal and symmetric uniform distribution
on a bounded interval. In addition, they satisfy the following condition:
(*) There exists an α > 0 such that P{|v| > α√n} → 0 as n → ∞.
Before we answer the last two questions raised at the beginning, we state the following known result on the perceptron algorithm as a second lemma:
Lemma 2 Suppose there exists a unit vector w* such that w* · v > δ for some
δ > 0 and for all pattern vectors v. Then the perceptron algorithm will converge to
a solution vector in ≤ N^2/δ^2 iterations, where N is the maximum length
of the pattern vectors.
Now we are ready to state the following
Theorem 5 Suppose the patterns satisfy the probabilistic assumptions stated in
Lemma 1 and the condition (*); then with high probability, the perceptron takes
O(n^2) arithmetic operations to terminate.
As mentioned earlier, another way of finding a separating hyperplane is to solve
a system of linear inequalities using linear programming, which requires O(n^3.5)
arithmetic operations [7]. Under our probabilistic assumptions, the patterns are
linearly independent with high probability, so that we can actually solve a system
of linear equations. However, this still requires O(n^3) arithmetic operations. Further, these methods require batch processing in the sense that all patterns have to
be stored in advance in order to find the desired parameters, in contrast to the
sequential 'learning' nature of the perceptron algorithm. So for training this neural
network classifier, the perceptron algorithm seems preferable.
When the number of patterns is polynomial in the total number of bits representing
each pattern, we may first extend each vector to a dimension at least as large as
the number of patterns, and then apply the perceptron to compress the storage of
parameters. One way of adding these extra bits is to form products of the coordinates within each pattern. Note that by doing so, the coordinates of each pattern
are pairwise independent. We conjecture that Theorem 3 still applies, implying even
more reduction in storage requirements. Simulation results strongly support our
conjecture.
4
Conclusion
In this paper, the finite precision computational aspects of pattern classification
problems are emphasized. We show that the perceptron, in contrast to common belief, can be quite efficient as a pattern classifier, provided the patterns are given with
high enough precision. Using a probabilistic approach, we show that the perceptron algorithm can even outperform linear programming under certain conditions.
During the course of this work, we also discovered some mathematical connections
with VLSI circuit testing and the theory of random matrices. In particular, we
have proved an open conjecture regarding the minimum singular value of a random
matrix.
Acknowledgements
This work was supported in part by the Joint Services Program at Stanford University (US Army, US Navy, US Air Force) under Contract DAAL03-88-C-0011,
and NASA Headquarters, Center for Aeronautics and Space Information Sciences
(CASIS) under Grant NAGW-419-S5.
References
[1] M. Minsky and S. Papert, Perceptrons, The MIT Press, expanded edition, 1988.
[2] F. Rosenblatt, Principles of Neurodynamics, Spartan Books, New York, 1962.
[3] T. M. Cover, "Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition", IEEE Trans. on Electronic
Computers, EC-14:326-34, 1965.
[4] P. Erdos and J. Spencer, Probabilistic Methods in Combinatorics, Academic
Press/ Akademiai Kiado, New York-Budapest, 1974.
[5] G. Seroussi and N. Bshouty, "Vector Sets for Exhaustive Testing of Logic Circuits", IEEE Trans. Inform. Theory, IT-34:513-522, 1988.
[6] J. Cohen, H. Kesten and C. Newman, editor, Random Matrices and Their Applications, volume 50 of Contemporary Mathematics, American Mathematical
Society, 1986.
[7] N. Karmarkar, "A New Polynomial-Time Algorithm for Linear Programming",
Combinatorica 1, pages 373-395, 1984.
1,949 | 2,770 | Affine Structure From Sound
Sebastian Thrun
Stanford AI Lab
Stanford University, Stanford, CA 94305
Email: [email protected]
Abstract
We consider the problem of localizing a set of microphones together
with a set of external acoustic events (e.g., hand claps), emitted at unknown times and unknown locations. We propose a solution that approximates this problem under a far field approximation defined in the
calculus of affine geometry, and that relies on singular value decomposition (SVD) to recover the affine structure of the problem. We then
define low-dimensional optimization techniques for embedding the solution into Euclidean geometry, and further techniques for recovering the
locations and emission times of the acoustic events. The approach is useful for the calibration of ad-hoc microphone arrays and sensor networks.
1
Introduction
Consider a set of acoustic sensors (microphones) for detecting acoustic events in the environment (e.g., a hand clap). The structure from sound (SFS) problem addresses the problem of simultaneously localizing a set of N sensors and a set of M external acoustic events,
whose locations and emission times are unknown.
The SFS problem is relevant to the spatial calibration problem for microphone arrays.
Classically, microphone arrays are mounted on fixed brackets of known dimensions; hence
there is no spatial calibration problem. Ad-hoc microphone arrays, however, involve a person placing microphones at arbitrary locations with limited knowledge as to where they
are. Today's best practice requires a person to measure the distance between the microphones by hand, and to apply algorithms such as multi-dimensional scaling (MDS) [1] for
recovering their locations. When sensor networks are deployed from the air [4], manual calibration may not be an option. Some techniques rely on GPS receivers [8]. Others require
a capability to emit and sense wireless radio signals [5] or sounds [9, 10], which are then
used to estimate relative distances between microphones (directly or indirectly, as in [9]).
Unfortunately, wireless signal strength is a poor estimator of range, and active acoustic
and GPS localization techniques are uneconomical in that they consume energy and require additional hardware. In contrast, SFS relies on environmental acoustic events such
as hand claps, which are not generated by the sensor network. The general SFS problem
was previously treated in [2] under the name passive localization. A related paper [3] describes a technique for incrementally localizing a microphone relative to a well-calibrated
microphone array through external sound events.
In this paper, the structure from sound (SFS) problem is defined as the simultaneous localization problem of N sound sensors and M acoustic events in the environment detected
by these sensors. Each event occurs at an unknown time and an unknown location. The
sensors are able to measure the detection times of the event. We assume that the clocks
of the sensors are synchronized (see [6]); that events are spaced sufficiently far apart in
time to make the association between different sensors unambiguous; and we also assume
absence of sound reverberation. For ease of representation, the paper assumes a 2D
world, although the technique is easily generalized to 3D.
Under the assumption of independent and identically distributed (iid) Gaussian noise,
the SFS problem can be formulated as a least squares problem in a space over three types of
variables: the locations of the microphones, the locations of the acoustic events, and their
emission times. However, this least squares problem is plagued by local minima, and the
number of constraints is quite large.
The gist of this paper is to transform this optimization problem into a sequence of simpler
problems, some of which can be solved optimally, without the danger of getting stuck in
local minima. The key transformation involves a far field approximation, which presupposes that the sound sources are relatively far away from the sensors. This approximation
reformulates the problem as one of recovering the incident angle of the acoustic signal,
which is the same for all sensors for any fixed acoustic event. The resulting optimization
problem is still non-linear; however, by relaxing the laws of Euclidean geometry into the
more general calculus of affine geometry, the optimization problem can be solved by singular value decomposition (SVD). The resulting solution is mapped back into Euclidean
space by optimizing a matrix of size 2 × 2, which is easily carried out using gradient descent. A subsequent non-linear optimization step overcomes the far field approximation
and enables the algorithm to recover locations and emission times of the defining acoustic
events. Experimental results illustrate that our approach reliably solves hard SFS problems
where gradient-based techniques consistently fail.
Our approach is similar in spirit to the affine solution to the structure from motion
(SFM) problem proposed by a seminal paper by Tomasi&Kanade [11], which was later
extended to the non-orthographic case [7]. Like us, these authors expressed the structure
finding problem using affine geometry, and applied SVD for solving it. SFM is of course
defined for cameras, not for microphone arrays. Camera measure angles, whereas microphones measure range. This paper establishes an affine solution to the structure from sound
problem that tends to work well in practice.
2
Problem Definition
2.1 Setup
We are given N sensors (microphones) located in a 2D plane. We shall denote the location
of the i-th sensor by (xi yi ), which defined the following sensor location matrix of size
N ? 2:
?
?
X
=
?
?
?
x1
x2
..
.
xN
y1
y2 ?
.. ?
?
.
yN
(1)
We assume that the sensor array detects M acoustic events. Each event has as unknown coordinate and an unknown emission time. The coordinate of the j-th event shall be denoted
(aj bj ), providing us with the event location matrix A of size M ? 2. The emission time
of the j-th acoustic event is denoted tj , resulting in the vector T of length M :
?
?
?
?
a1
A
=
? a2
? .
? ..
aM
b1
b2 ?
.. ?
?
.
bM
?
T = ?
?
t1
t2 ?
.. ?
?
.
tM
(2)
X, A, and T , comprise the set of unknown variables. In problems such as sensor calibration, only X is of interest. In general SFS applications, A and T might also be of interest.
2.2 Measurement Data
In SFS, the variables X, A, and T are recovered from data. The data establishes the
detection times of the acoustic events by the individual sensors. Specifically, the data
matrix is of the form:
?
?
d1,1
D
=
? d2,1
? .
? ..
dN,1
d1,2
d2,2
..
.
dN,2
???
???
..
.
???
d1,M
d2,M ?
?
..
?
.
dN,M
(3)
Here each di,j denotes the detection time of acoustical event j by sensor i. Notice that we
assume that there is no data association problem. Even if all acoustic events sound alike,
the correspondence between different detections is easily established as long as there exists
sufficiently long time gaps between any two sound events.
The matrix D is a random field induced by the laws of sound propagation (without reverberation). In the absence of measurement noise, each di,j is the sum of the corresponding emission time tj , plus the time it takes for sound to travel from (aj bj ) to (xi yi ):
xi
aj
?1
di,j = tj + c
?
(4)
yi
bj
Here | ? | denotes the L2 norm (Euclidean distance), and c denoted the speed of sound.
2.3 Relative Formulation
Obviously, we cannot recover the global coordinates of the sensors. Hence, without loss of
generality, we define the first sensor?s location as x1 = y1 = 0. This gives us the relative
location matrix for the sensors:
?
?
x2 ? x 1
?
X
=
? x3 ? x 1
?
..
?
.
xN ? x 1
y2 ? y 1
y3 ? y 1 ?
?
..
?
.
yN ? y 1
(5)
This relative sensor location matrix is of dimension (N ? 1) ? 2.
It shall prove convenient to subtract from the arrival time di,j the arrival time d1,j
measured by the first sensor i = 1. This relative arrival time is defined as ?i,j := di,j ?
d1,j . In the relative arrival time, the absolute emission times tj cancel out:
xi
aj
aj
?1
?1
?i,j = tj + c
?
? t j ? c bj
yi
bj
xi
aj aj
?1
= c
?
(6)
yi
? bj
bj
We now define the matrix of relative arrival times:
?
?
=
d2,1 ? d1,1
? d3,1 ? d1,1
?
..
?
.
dN,1 ? d1,1
d2,2 ? d1,2
d3,2 ? d1,2
..
.
dN,2 ? d1,2
This matrix ? is of dimension (N ? 1) ? M .
???
???
..
.
???
?
d2,M ? d1,M
d3,M ? d1,M ?
?
..
?
.
dN,M ? d1,M
(7)
2.4 Least Squares Formulation
The relative sensor locations X and the corresponding locations of the acoustic events
A can now be recovered through the following least squares problem. This optimization
seeks to identify X and A so as to minimize the quadratic difference between the predicted
relative measurements and the actual measurements.
2
N X
M
X
xi
aj aj
?
?
hA , X i = argmin
?
(8)
yi
? bj ? ?i,j
bj
X,A
i=2 j=1
The minimum of this expression is a maximum likelihood solution for the SFS problem
under the assumption of iid Gaussian measurement noise.
If emission times are of interest, they are now easily recovered by the following
weighted mean:
N
xi
1 X
aj
?
T
=
di,j ? c
?
(9)
yi
bj
N
i=1
The minimum of Eq. 8 is not unique. This is because any solution can be rotated around
the origin of the coordinate system, and mirrored through any axis intersecting the origin.
This shall not concern us, as we shall be content with any solution of Eq. 8; others are then
easily generated.
What is of concern, however, is the fact that minimizing Eq. 8 is difficult. A straw
man algorithm?which tends to work poorly in practice?involves starting with random
guesses for X and A and then adjusting them in the direction of the negative gradient until
convergence. As we shall show experimentally, such gradient algorithms work poorly in
practice because of the large number of local minima.
3
The Far Field Approximation
The essence of our approximation pertains to the fact that for far range acoustic events?
i.e., events that are (infinitely) far away from the sensor array?the incoming sound wave
hits each sensor at the same incident angle. Put differently, the rays connecting the location
of an acoustic event (aj bj ) with each of the perceiving sensors (xi yi ) are approximately
parallel for all i (but not for all j!). Under the far field approximation, these incident angles
are entirely parallel. Thus, all that matters are the incident angle of the acoustic events.
To derive an equation for this case, it shall prove convenient to write the Euclidean
distance between a sensor and an acoustic event as a function of the incident angle ?. This
angle is given by the four-quadrant extension of the arctan function:
?i,j
=
arctan2
bj ? y i
aj ? x i
(10)
The Euclidean distance between (aj bj ) and (xi yi ) can now be written as
xi
aj
aj ? x i
?
=
(cos
?
sin
?
)
i,j
i,j
yi
bj
bj ? y i
(11)
For far-away points (aj bj ), we can safely assume that all incident angles for the j-th
acoustic event are identical:
?j
:=
?1,j = ?2,j = . . . = ?N,j
(12)
Hence we substitute ?j for ?i,j in Eq. 11. Plugging this back into Eq. 6, this gives us the
following expression for ?i,j :
xi
aj aj
?1
?i,j = c
?
?
yi
bj
bj
?
c?1
=
c?1 (cos ?j sin ?j )
aj ? x i
bj ? y i
(cos ?j sin ?j )
xi
yi
?
aj
bj
(13)
This leads to the following non-linear least squares problem for the desired sensor locations:
2
cos ?1 cos ?2 ? ? ? cos ?M
?
?
?
hX , ?1 , . . . , ?M i =
argmin X
? ?
(14)
sin
?
sin
?
?
?
?
sin
?
1
2
M
X,?1 ,...,?M
The reader many notice that in this formulation of the SFS problem, the locations of the
sound events (aj , bj ) have been replaced by ?j , the incident angles of the sound waves.
One might think of this as the ?ortho-acoustic? model of sound propagation (in analogy
to the orthographic camera model in computer vision). The ortho-acoustic projection reduces the number of variables in the optimization. However, the argument in the quadratic
expression is still non-linear, due to the non-linear trigonometric functions involved.
4
Affine Solution for the Sensor Locations
Eq. 14 is trivially solvable in the space of affine geometry. Following [11], in affine geometry projections can be arbitrary linear functions, not just rotations and translations.
Specifically, let us replace the specialized matrix
cos ?1
sin ?1
cos ?2
sin ?2
???
???
cos ?M
sin ?M
(15)
by a general 2 ? M matrix of the form
?
?1,1
?2,1
=
?1,2
?2,2
???
???
?1,M
?2,M
(16)
This leads to the least squares problem
hX ? , ?? i
=
argmin |X? ? ?|2
(17)
X,?
In the noise free-case case, we know that there must exist a X and a ? for which X? = ?.
This suggests that the rank of ? should be 2, since it is the product of a matrix of size
(N ? 1) ? 2 and a matrix of size 2 ? M .
Further, we can recover both X and ? via singular value decomposition (SVD). Specifically, we know that the matrix ? can be decomposed as into three other matrices, U , V ,
and W :
UV W T
=
svd(?)
(18)
where U is a matrix of size (N ? 1) ? 2, V a diagonal matrix of eigenvalues of size 2 ? 2,
and W a matrix of size M ? 2. In practice, ? might be of higher rank because of noise or
because of violations of the far field assumption, but it suffices to restrict the consideration
to the first two eigenvalues.
The decomposition in Eq. 18 leads to the optimal affine solution of the SFS problem:
X
=
UV
? = WT
and
(19)
However, this solution is not yet Euclidean, since ? might not be of the form of Eq. 15.
Specifically, Eq. 15 is a function of angles, and each row in Eq. 15 must be of the form
cos2 ?j + sin2 ?j = 1. Clearly, this constraint is not enforced in the SVD.
However, there is an easy ?trick? for recovering a X and ? for which this constraint is
at least approximately met. The key insight is that for any invertible 2 ? 2 matrix C,
X0
U V C ?1
=
and
?0 = CW T
(20)
is equally a solution to the factorization problem in Eq.18. This is because
X 0 ?0
=
U V C ?1 CW T = U V W T = X?
(21)
The remaining search problem, thus, is the problem of finding an appropriate matrix C for
which ?0 is of the form of Eq. 15. This is a non-linear optimization problem, but it is much
lower-dimensional than the original SFS problem (it only involves 4 parameters!).
Specifically, we seek a C for which ?0 = CW T minimizes
2
(22)
C ? = argmin (1 1) (?0 ? ?0 ) ? (1 1 ? ? ? 1)
{z
}
|
C
(?)
Here ??? denotes the dot product. The expression labeled (?) evaluates to a vector of expressions of the form
2
2
2
2
2
2
(?1,1
+ ?2,1
?1,2
+ ?2,2
? ? ? ?1,M
+ ?2,M
)
(23)
2
(a) Error
3
2.5
grad. desc.
@
R
@
2
1.5
1
SVD
0.5
?
0
4
6
SVD+grad. desc.
?
8
10
N, M (here N=M)
12
14
log?error (95% confidence intervals)
error (95% confidence intervals)
3.5
(b) Log-error
1
6
0
?1
SVD
?2
?
grad. desc.
?3
6 desc.
SVD+grad.
?4
?5
4
6
8
10
N, M (here N=M)
12
14
Figure 1: (a) Error and (b) log error for three different algorithms: gradient descent (red), SVD
(blue), and SVD followed by gradient descent (green). Performance is shown for different values of
N and M , with N = M . The plot also shows 95% confidence bars.
(a) ground truth
(b) gradient descent
(c) SVD
(d) SVD + grad. desc.
sensors
acoustic events
Figure 2: Typical SFS results for a simulated array of nine microphones spaced in a regular grid,
surrounded by 9 sounds arranged on a circle. (a) Ground truth; (b) Result of plain gradient descent
after convergence; the dashed lines visualize the residual error; (c) Result of the SVD with sound
directions as indicated; and (d) Result of gradient descent initialized with our SVD result.
The minimization in Eq. 22 is carried out through standard gradient descent. It involves
only 4 variables (C is of the size 2 ? 2), and each single iteration is linear in O(N + M )
(instead of the O(N M ) constraints that define Eq. 8). In (tens of thousands of) experiments
with synthetic noise-free data, we find empirically that gradient descent reliably converges
to the globally optimal solution.
5
Recovering the Acoustic Event Locations and Emission Times
With regards to the acoustic events, the optimization for the far field case only yields the
incident angles. In the near field setting, in which the incident angles tend to differ for
different sensors, it may be desirable to recover the locations A of the acoustic event and
the corresponding emission times T .
To determine these variables, we use the vector X ? from the far field case as mere
starting points in a subsequent gradient search. The event location matrix A is initialized
by selecting points sufficiently far away along the estimated incident angle for the far field
approximation to be sound:
A
0?
=
?
k ?0?
T
(24)
?
Here ? = C W with C defined in Eq. 22, and k is a multiple of the diameter of the
locations in X. With this initial guess for A, we apply gradient descent to optimize Eq. 8,
and finally use Eq. 9 to recover T .
6
Experimental Results
We ran a series of simulation experiments to characterize the quality of our algorithm,
especially in comparison with the obvious nonlinear least squares problem (Eq. 8) from
which it is derived. Fig. 1 graphs the residual error as a function of the number of sensors
2
(a) Error
3
2.5
2
grad. desc.
@
R
@
1.5
SVD
1
0.5
0
0
SVD+grad. desc.
2
4
6
8
diameter ratio of events vs sensor array
10
log?error (95% confidence intervals)
error (95% confidence intervals)
3.5
(b) Log-error
1
grad. desc.
0
?1
SVD
?2
?3
SVD+grad. desc.
?4
0
2
4
6
8
diameter ratio of events vs sensor array
10
Figure 3: (a) Error and (b) log-error for three different algorithms (gradient descent in red, SVD
in blue, and SVD followed by gradient descent in green), graphed here for varying distances of the
sound events to the sensor array. An error above 2 means the reconstruction has entirely failed. All
diagrams also show the 95% confidence intervals, and we set N = M = 10.
(a) One of our motes
used to generate the data
(b) Optimal vs. hand-measured
m
o
t
e
s
(c) Result of gradient descent
(d) SVD and GD
sounds
motes
Figure 4: Results using our seven sensor motes as the sensor array, and a seventh mote to generate
sound events. (a) A mote; (b) the globally optimal solution (big circles) compared to the handmeasures locations (small circles); (c) a typical result of vanilla gradient descent; and (d) the result
of our approach, all compared to the optimal solution given the (noisy) data.
N and acoustic events M (here N = M ). Panel (a) plots the regular error along with
95% confidence intervals, and panel (b) the corresponding log-error. Clearly, as N and M
increase, plain gradient descent tends to diverge, whereas our approach converges. Each
data point in these graphs was obtained by averaging 1,000 random configurations, in which
sensors were sampled uniformly within an interval of 1?1m; sounds were placed at varying
ranges, from 2m to 10m. An example outcome (for a non-random configuration!) is shown
in Fig. 2. This figure plots (a) a simulated sensor array consisting of 9 sensors with 9 sound
sources arranged in a circle; and (b)-(d) the resulting reconstructions of our three methods.
For the SVD result shown in (c), only the directions of the incoming sounds are shown.
An interesting question pertains to the effect of the far field approximation in cases
where it is clearly violated. To examine the robustness of our approach, we ran a series of
experiments in which we varied the diameter of the acoustic events relative to the diameter
of the sensors. If this parameter is 1, the acoustic events are emitted in the same region as
the microphones; for values such as 10, the events are far away.
Fig. 3 graphs the residual errors and log-errors. The further away the acoustic events,
the better our results. However, even for nearby events, for which the far field assumption
is clearly invalid, our approach generates results that are no worse than those of the plain
gradient descent technique.
We also implemented our approach using a physical sensor array. Fig. 4 plots empirical
results using a microphone array comprised of seven Crossbow sensor motes, one of which
is shown in Panel (a). Panels (b-d) compare the recovered structure with the one that
globally minimizes the LMS error, which we obtain by running gradient descent using the
hand-measured locations as starting point. Panel (a) in Fig. 4 shows the manually measured
locations; the relatively high deviation to the LMS optimum is the result of measurement
error, which is amplified by the fact that our motes are only spaced a few tens of centimeters
apart from each other (the standard deviation in the timing error corresponds to a distance
of 6.99cm, and the motes are placed between 14cm and 125cm apart). Panel (b) in Fig. 4
shows the solution of plain gradient descent applied to applied to Eq.8 and compares it
to the optimal reconstruction; and Panel (c) illustrates our solution. In all plots the lines
indicate residual error. This result shows that our method may work well on real-world
data that is noisy and that does not adhere to the far field assumption.
7
Discussion
This paper considered the structure from sound problem and presented an algorithm for
solving it. Our approach makes is possible to simultaneously recover the location of a
collection of microphones, the locations of external acoustic events detected by these microphones, and the emission times for these events. By resorting to affine geometry, our
approach overcomes the problem of local minima in the structure from sound problem.
There remain a number of open research issues. We believe the extension to 3-D is
mathematically straightforward but requires empirical validation. The current approach
also fails to address reverberation problems that are common in confined space. It shall
further be interesting to investigate data association problems in the SFS framework, and
to develop parallel algorithms that can be implemented on sensor networks with limited
communication resources. Finally, of great interest should be the incomplete data case in
which individual sensors may fail to detect acoustic events?a problem studied in [2].
Acknowledgement
The motes data was made available by Rahul Biswas, which is gratefully acknowledged.
We also acknowledge invaluable suggestions by three anonymous reviewers.
References
[1] S.T. Birchfield and A. Subramanya. Microphone array position calibration by basis-point classical multidimensional scaling. IEEE Trans. Speech and Audio Processing, forthcoming.
[2] R. Biswas and S. Thrun. A passive approach to sensor network localization. IROS-04.
[3] J.C. Chen, R.E. Hudson, and K. Yao. Maximum likelihod source localization and unknown sensor location estimation for wideband signals in the near-field. IEEE Trans. Signal Processing,
50, 2002.
[4] P. Corke, S. Hrabar, R. Peterson, D. Rus, S. Saripalli, and G. Sukhatme. Deployment and
connectivity repair of a sensor net with a flying robot. ISER-04.
[5] E. Elnahrawy, X. Li, and R. Martin. The limits of localization using signal strength: A comparative study. SECON-04.
[6] J. Elson and K. Romer. Wireless sensor networks: A new regime for time synchronization.
HotNets-02.
[7] S. Mahamud and M. Hebert. Iterative projective reconstruction from multiple views. CVPR-00.
[8] D. Niculescu and B. Nath. Ad hoc positioning system (APS). GLOBECOM-01.
[9] V.C. Raykar, I.V. Kozintsev, and R. Lienhart. Position calibration of microphones and loudspeakers in distributed computing platforms. IEEE transaction on Speech and Audio Processing, 13(1), 2005.
[10] J. Sallai, G. Balogh, M. Maroti, and A. Ledeczi. Acoustic ranging in resource-constrained
sensor networks. eCOTS-04.
[11] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2), 1992.
[12] T.L. Tung, K. Yao, D. Chen, R.E. Hudson, and C.W. Reed. Source localization and spatial filtering using wideband music and maxiumum power beam forming for multimedia applications.
In SIPS-99.
| 2770 |@word norm:1 open:1 d2:6 calculus:2 seek:2 cos2:1 simulation:1 decomposition:4 initial:1 configuration:2 series:2 selecting:1 recovered:4 current:1 yet:1 written:1 must:2 subsequent:2 shape:1 enables:1 plot:5 gist:1 aps:1 v:3 guess:2 plane:1 detecting:1 location:31 arctan:1 simpler:1 dn:6 along:2 prove:2 ijcv:1 ray:1 x0:1 examine:1 multi:1 detects:1 decomposed:1 globally:3 kozintsev:1 actual:1 panel:7 what:1 argmin:4 cm:3 minimizes:2 finding:2 transformation:1 safely:1 y3:1 multidimensional:1 sip:1 hit:1 yn:2 t1:1 hudson:2 local:4 reformulates:1 tends:3 timing:1 limit:1 approximately:2 might:4 plus:1 studied:1 suggests:1 relaxing:1 co:9 deployment:1 ease:1 limited:2 factorization:2 wideband:2 range:4 projective:1 unique:1 camera:3 practice:5 orthographic:2 x3:1 danger:1 empirical:2 convenient:2 projection:2 confidence:7 quadrant:1 regular:2 cannot:1 put:1 seminal:1 optimize:1 reviewer:1 straightforward:1 starting:3 estimator:1 insight:1 array:17 embedding:1 coordinate:4 ortho:2 today:1 gps:2 origin:2 trick:1 located:1 labeled:1 tung:1 solved:2 thousand:1 region:1 ran:2 environment:2 solving:2 secon:1 flying:1 localization:7 basis:1 easily:5 differently:1 detected:2 outcome:1 whose:1 quite:1 stanford:4 presupposes:1 cvpr:1 consume:1 think:1 noisy:2 subramanya:1 obviously:1 hoc:3 sequence:1 eigenvalue:2 net:1 propose:1 reconstruction:4 product:2 relevant:1 trigonometric:1 poorly:2 amplified:1 getting:1 convergence:2 optimum:1 comparative:1 converges:2 rotated:1 illustrate:1 derive:1 develop:1 measured:4 eq:19 solves:1 recovering:5 predicted:1 involves:4 implemented:2 indicate:1 synchronized:1 met:1 direction:3 differ:1 require:2 hx:2 suffices:1 anonymous:1 mathematically:1 desc:9 extension:2 sufficiently:3 around:1 ground:2 considered:1 plagued:1 great:1 bj:21 visualize:1 lm:2 a2:1 estimation:1 travel:1 radio:1 establishes:2 weighted:1 minimization:1 clearly:4 sensor:52 gaussian:2 varying:2 derived:1 emission:12 consistently:1 rank:2 likelihood:1 contrast:1 sense:1 am:1 sin2:1 detect:1 niculescu:1 issue:1 denoted:3 spatial:3 platform:1 constrained:1 field:15 comprise:1 manually:1 identical:1 placing:1 cancel:1 others:2 t2:1 few:1 simultaneously:2 individual:2 replaced:1 geometry:8 consisting:1 detection:4 interest:4 investigate:1 violation:1 bracket:1 tj:5 emit:1 incomplete:1 euclidean:7 initialized:2 desired:1 circle:4 localizing:3 deviation:2 comprised:1 seventh:1 optimally:1 characterize:1 synthetic:1 calibrated:1 gd:1 person:2 straw:1 invertible:1 together:1 connecting:1 diverge:1 yao:2 intersecting:1 connectivity:1 classically:1 worse:1 external:4 li:1 b2:1 matter:1 ad:3 stream:1 later:1 view:1 lab:1 red:2 wave:2 recover:7 option:1 capability:1 parallel:3 minimize:1 air:1 square:7 elson:1 spaced:3 identify:1 yield:1 iid:2 mere:1 simultaneous:1 sebastian:1 manual:1 email:1 definition:1 evaluates:1 energy:1 sukhatme:1 involved:1 obvious:1 di:6 sampled:1 adjusting:1 knowledge:1 back:2 higher:1 rahul:1 formulation:3 arranged:2 generality:1 just:1 clock:1 until:1 hand:6 nonlinear:1 propagation:2 incrementally:1 quality:1 indicated:1 aj:21 graphed:1 believe:1 name:1 effect:1 y2:2 biswas:2 hence:3 sin:9 raykar:1 unambiguous:1 essence:1 generalized:1 motion:2 invaluable:1 passive:2 ranging:1 image:1 consideration:1 common:1 rotation:1 specialized:1 physical:1 empirically:1 association:3 approximates:1 measurement:6 ai:1 vanilla:1 resorting:1 uv:2 trivially:1 iser:1 grid:1 gratefully:1 dot:1 calibration:7 robot:1 optimizing:1 apart:3 yi:12 minimum:6 additional:1 determine:1 signal:6 
dashed:1 multiple:2 sound:29 desirable:1 reduces:1 positioning:1 long:2 equally:1 a1:1 plugging:1 vision:1 iteration:1 orthography:1 confined:1 beam:1 whereas:2 interval:7 diagram:1 singular:3 source:4 adhere:1 induced:1 tend:1 spirit:1 nath:1 emitted:2 near:2 identically:1 easy:1 forthcoming:1 restrict:1 tm:1 grad:9 expression:5 speech:2 nine:1 useful:1 involve:1 sfs:15 transforms:1 ten:2 hardware:1 diameter:5 generate:2 exist:1 mirrored:1 notice:2 estimated:1 blue:2 write:1 shall:8 key:2 four:1 acknowledged:1 d3:3 iros:1 graph:3 sum:1 enforced:1 angle:13 reader:1 mahamud:1 scaling:2 sfm:2 entirely:2 followed:2 correspondence:1 quadratic:2 strength:2 constraint:4 x2:2 nearby:1 generates:1 speed:1 argument:1 relatively:2 mote:9 martin:1 loudspeaker:1 poor:1 describes:1 remain:1 globecom:1 alike:1 repair:1 equation:1 resource:2 previously:1 fail:2 know:2 available:1 apply:2 away:6 indirectly:1 appropriate:1 romer:1 robustness:1 substitute:1 original:1 assumes:1 denotes:3 remaining:1 running:1 music:1 especially:1 classical:1 question:1 occurs:1 md:1 diagonal:1 gradient:21 distance:7 cw:3 mapped:1 thrun:3 simulated:2 acoustical:1 seven:2 ru:1 length:1 reed:1 providing:1 minimizing:1 ratio:2 saripalli:1 setup:1 unfortunately:1 difficult:1 birchfield:1 likelihod:1 negative:1 reverberation:3 corke:1 reliably:2 unknown:9 acknowledge:1 descent:17 defining:1 extended:1 communication:1 y1:2 varied:1 arbitrary:2 tomasi:2 acoustic:35 established:1 trans:2 address:2 able:1 bar:1 regime:1 green:2 power:1 event:48 treated:1 rely:1 solvable:1 residual:4 axis:1 carried:2 l2:1 acknowledgement:1 relative:11 law:2 synchronization:1 loss:1 interesting:2 suggestion:1 mounted:1 filtering:1 analogy:1 validation:1 incident:10 affine:12 surrounded:1 translation:1 row:1 centimeter:1 course:1 clap:3 placed:2 wireless:3 free:2 hebert:1 peterson:1 lienhart:1 absolute:1 distributed:2 regard:1 dimension:3 xn:2 world:2 plain:4 stuck:1 author:1 collection:1 made:1 bm:1 far:19 transaction:1 overcomes:2 global:1 active:1 incoming:2 receiver:1 b1:1 xi:12 search:2 iterative:1 kanade:2 ca:1 big:1 noise:6 arrival:5 x1:2 fig:6 deployed:1 fails:1 position:2 concern:2 exists:1 illustrates:1 gap:1 chen:2 subtract:1 infinitely:1 forming:1 failed:1 expressed:1 corresponds:1 truth:2 environmental:1 relies:2 formulated:1 invalid:1 replace:1 absence:2 content:1 hard:1 man:1 experimentally:1 specifically:5 perceiving:1 typical:2 uniformly:1 wt:1 averaging:1 microphone:22 multimedia:1 svd:24 experimental:2 pertains:2 violated:1 audio:2 d1:14 |
1,950 | 2,771 | Comparing the Effects of Different Weight
Distributions on Finding Sparse Representations
David Wipf and Bhaskar Rao ?
Department of Electrical and Computer Engineering
University of California, San Diego, CA 92093
[email protected], [email protected]
Abstract
Given a redundant dictionary of basis vectors (or atoms), our goal is to
find maximally sparse representations of signals. Previously, we have
argued that a sparse Bayesian learning (SBL) framework is particularly
well-suited for this task, showing that it has far fewer local minima than
other Bayesian-inspired strategies. In this paper, we provide further evidence for this claim by proving a restricted equivalence condition, based
on the distribution of the nonzero generating model weights, whereby the
SBL solution will equal the maximally sparse representation. We also
prove that if these nonzero weights are drawn from an approximate Jeffreys prior, then with probability approaching one, our equivalence condition is satisfied. Finally, we motivate the worst-case scenario for SBL
and demonstrate that it is still better than the most widely used sparse representation algorithms. These include Basis Pursuit (BP), which is based
on a convex relaxation of the ?0 (quasi)-norm, and Orthogonal Matching Pursuit (OMP), a simple greedy strategy that iteratively selects basis
vectors most aligned with the current residual.
1
Introduction
In recent years, there has been considerable interest in finding sparse signal representations
from redundant dictionaries [1, 2, 3, 4, 5]. The canonical form of this problem is given by,
min kwk0 ,
s.t. t = ?w,
(1)
w
where ? ? RN ?M is a matrix whose columns represent an overcomplete or redundant
basis (i.e., rank(?) = N and M > N ), w ? RM is the vector of weights to be learned,
and t is the signal vector. The cost function being minimized represents the ?0 (quasi)-norm
of w (i.e., a count of the nonzero elements in w).
Unfortunately,
an exhaustive search for the optimal representation requires the solution of
up to M
linear
systems of size N ? N , a prohibitively expensive procedure for even
N
modest values of M and N . Consequently, in practical situations there is a need for approximate procedures that efficiently solve (1) with high probability. To date, the two most
widely used choices are Basis Pursuit (BP) [1] and Orthogonal Matching Pursuit (OMP)
[5]. BP is based on a convex relaxation of the ?0 norm, i.e., replacing kwk0 with kwk1 ,
which leads to an attractive, unimodal optimization problem that can be readily solved via
linear programming. In contrast, OMP is a greedy strategy that iteratively selects the basis
?
This work was supported by DiMI grant 22-8376, Nissan, and NSF grant DGE-0333451.
vector most aligned with the current signal residual. At each step, a new approximant is
formed by projecting t onto the range of all the selected dictionary atoms.
Previously [9], we have demonstrated an alternative algorithm for solving (1) using a sparse
Bayesian learning (SBL) framework [6] that maintains several significant advantages over
other, Bayesian-inspired strategies for finding sparse solutions [7, 8]. The most basic formulation begins with an assumed likelihood model of the signal t given weights w,
1
(2)
p(t|w) = (2?? 2 )?N/2 exp ? 2 kt ? ?wk22 .
2?
To provide a regularizing mechanism, SBL uses the parameterized weight prior
M
Y
w2
?1/2
p(w; ?) =
(2??i )
exp ? i ,
2?i
i=1
(3)
where ? = [?1 , . . . , ?M ]T is a vector of M hyperparameters controlling the prior variance
of each weight. These hyperparameters can be estimated from the data by marginalizing
over the weights and then performing ML optimization. The cost function for this task is
Z
L(?) = ? log p(t|w)p(w; ?)dw ? log |?t | + tT ??1
(4)
t t,
where ?t , ? 2 I + ???T and we have introduced the notation ? , diag(?). This procedure, which can be implemented via the EM algorithm (or some other technique), is
referred to as evidence maximization or type-II maximum likelihood [6]. Once ? has been
estimated, a closed-form expression for the posterior weight distribution is available.
Although SBL was initially developed in a regression context, it can be easily adapted to
handle (1) in the limit as ? 2 ? 0. To accomplish this we must reexpress the SBL iterations
to handle the low noise limit. Applying various matrix identities to the EM algorithm-based
update rules for each iteration, we arrive at the modified update [9]
?
1/2
1/2
? (old) w
? (Told) + I ? ?(old) ??(old) ? ?(old)
?(new) = diag w
? (new)
w
?
1/2
1/2
= ?(new) ??(new) t,
(5)
where (?)? denotes the Moore-Penrose pseudo-inverse. Given that t ? range(?) and assuming ? is initialized with all nonzero elements, then feasibility is enforced at every itera? We will henceforth refer to wSBL as the solution of this algorithm when
tion, i.e., t = ?w.
? = ?? t.1 In [9] (which extends work in [10]), we have argued
initialized at ? = IM and w
SBL
why w should be considered a viable candidate for solving (1).
In comparing BP, OMP, and SBL, we would ultimately like to know in what situations a
particular algorithm is likely to find the maximally sparse solution. A variety of results stipulate rigorous conditions whereby BP and OMP are guaranteed to solve (1) [1, 4, 5]. All
of these conditions depend explicitly on the number of nonzero elements contained in the
optimal solution. Essentially, if this number is less than some ?-dependent constant ?, the
BP/OMP solution is proven to be equivalent to the minimum ?0 -norm solution. Unfortunately however, ? turns out to be restrictively small and, for a fixed redundancy ratio M/N ,
grows very slowly as N becomes large [3]. But in practice, both approaches still perform
well even when these equivalence conditions have been grossly violated. To address this
issue, a much looser bound has recently been produced for BP, dependent only on M/N .
This bound holds for ?most? dictionaries in the limit as N becomes large [3], where ?most?
1
Based on EM convergence properties, the algorithm will converge monotonically to a fixed point.
is with respect to dictionaries composed of columns drawn uniformly from the surface of
an N -dimensional unit hypersphere. For example, with M/N = 2, it is argued that BP is
capable of resolving sparse solutions with roughly 0.3N nonzero elements with probability
approaching one as N ? ?.
Turning to SBL, we have neither a convenient convex cost function (as with BP) nor a
simple, transparent update rule (as with OMP); however, we can nonetheless come up with
an alternative type of equivalence result that is neither unequivocally stronger nor weaker
than those existing results for BP and OMP. This condition is dependent on the relative
magnitudes of the nonzero elements embedded in optimal solutions to (1). Additionally,
we can leverage these ideas to motivate which sparse solutions are the most difficult to find.
Later, we provide empirical evidence that SBL, even in this worst-case scenario, can still
outperform both BP and OMP.
2
Equivalence Conditions for SBL
In this section, we establish conditions whereby wSBL will minimize (1). To state these
results, we require some notation. First, we formally define a dictionary ? = [?1 , . . . , ?M ]
as a set of M unit ?2 -norm vectors (atoms) in RN , with M > N and rank(?) = N . We
say that a dictionary satisfies the unique representation property (URP) if every subset of
?
N atoms forms a basis in RN . We define w(i) as the i-th largest weight magnitude and w
as the kwk0 -dimensional vector containing all the nonzero weight magnitudes of w. The
set of optimal solutions to (1) is W ? with cardinality |W ? |. The diversity (or anti-sparsity)
of each w? ? W ? is defined as D? , kw? k0 .
Result 1. For a fixed dictionary ? that satisfies the URP, there exists a set of M ? 1 scaling
constants ?i ? (0, 1] (i.e., strictly greater than zero) such that, for any t = ?w? generated
with
?
?
w(i+1)
? ?i w(i)
i = 1, . . . , M ? 1,
(6)
SBL will produce a solution that satisfies kwSBL k0 = min(N, kw? k0 ) and wSBL ? W ? .
Do to space limitations, the proof has been deferred to [11]. The basic idea is that, as
the magnitude differences between weights increase, at any given scale, the covariance
?t embedded in the SBL cost function is dominated by a single dictionary atom such that
problematic local minimum are removed. The unique, global minimum in turn achieves the
stated result.2 The most interesting case occurs when kw? k0 < N , leading to the following:
Corollary 1. Given the additional restriction kw? k0 < N , then wSBL = w? ? W ? and
|W ? | = 1, i.e., SBL will find the unique, maximally sparse representation of the signal t.
See [11] for the proof. These results are restrictive in the sense that the dictionary dependent
constants ?i significantly confine the class of signals t that we may represent. Moreover,
we have not provided any convenient means of computing what the different scaling constants might be. But we have nonetheless solidified the notion that SBL is most capable of
recovering weights of different scales (and it must still find all D? nonzero weights no matter how small some of them may be). Additionally, we have specified conditions whereby
we will find the unique w? even when the diversity is as large as D? = N ? 1. The tighter
BP/OMP bound from [1, 4, 5] scales as O N ?1/2 , although this latter bound is much
more general in that it is independent of the magnitudes of the nonzero weights.
In contrast, neither BP or OMP satisfy a comparable result; in both cases, simple 3D
counter examples suffice to illustrate this point.3 We begin with OMP. Assume the fol2
Because we have effectively shown that the SBL cost function must be unimodal, etc., any proven
descent method could likely be applied in place of (5) to achieve the same result.
3
While these examples might seem slightly nuanced, the situations being illustrated can occur
frequently in practice and the requisite column normalization introduces some complexity.
lowing:
?
1
? ? ?
w? = ?
0 ?
0
?
?
0
? 0
?=?
1
?1
2
0
?1
2
0
1
0
?1
1.01
?0.1
1.01
0
?
?
?
?
t = ?w? = ?
??
2
?
?,
0
1 + ??2
(7)
where ? satisfies the URP and has columns ?i of unit ?2 norm. Given any ? ? (0, 1),
we will now show that OMP will necessarily fail to find w? . Provided ? < 1, at the first
iteration OMP will select ?1 , which solves maxi |tT ?i |, leaving the residual vector
?
r1 = I ? ?1 ?T1 t = [ ?/ 2 0 0 ]T .
(8)
Next, ?4 will be chosen since it has the largest value in the top position, thus solving
maxi |r1T ?i |. The residual is then updated to become
?
? [ 1 ?10 0 ]T .
r2 = I ? [ ?1 ?4 ][ ?1 ?4 ]T t =
(9)
101 2
From the remaining two columns, r2 is most highly correlated with ?3 . Once ?3 is selected, we obtain zero residual error, yet we did not find w? , which involves only ?1 and
?2 . So for all ? ? (0, 1), the algorithm fails. As such, there can be no fixed constant ? > 0
?
?
such that if w(2)
? ? ? ?w(1)
? ?, we are guaranteed to obtain w? (unlike with SBL).
We now give an analogous example for BP, where we present a feasible
smaller ?1 norm than the maximally sparse solution. Given
?
?
?
?
"
1
?0.1
0 1 ?0.1
1.02
1.02
? ? ?
? 0 0 ??0.1 ?0.1 ?
?
?
t = ?w =
w =?
?=?
1.02
1.02 ?
0 ?
?1
?1
1
0
0
1.02
1.02
solution with
?
0
1
#
,
(10)
it is clear that kw? k1 = 1 + ?. However, for all ? ? (0, 0.1), if we form a
feasible solution using only ?1 , ?3 , and ?4 , we obtain the alternate solution w =
T
?
?
with kwk1 ? 1 + 0.1?. Since this has a smaller
(1 ? 10?) 0 5 1.02? 5 1.02?
?1 norm for all ? in the specified range, BP will necessarily fail and so again, we cannot
reproduce the result for a similar reason as before.
At this point, it remains unclear what probability distributions are likely to produce weights
that satisfy the conditions of Result 1. It turns out that the Jeffreys prior, given by
p(x) ? 1/x, is appropriate for this task. This distribution has the unique property that
the probability mass assigned to any given scaling is equal. More explicitly, for any s ? 1,
P x ? si , si+1 ? log(s) ?i ? Z.
(11)
For example, the probability that x is between 1 and 10 equals the probability that it lies
between 10 and 100 or between 0.01 and 0.1. Because this is an improper density, we
define an approximate Jeffreys prior with range parameter a ? (0, 1]. Specifically, we say
that x ? J(a) if
?1
for x ? [a, 1/a].
(12)
p(x) =
2 log(a)x
With this definition in mind, we present the following result.
Result 2. For a fixed ? that satisfies the URP, let t be generated by t = ?w? , where w?
has magnitudes drawn iid from J(a). Then as a approaches zero, the probability that we
obtain a w? such that the conditions of Result 1 are satisfied approaches unity.
Again, for space considerations, we refer the reader to [11]. However, on a conceptual
level this result can be understood by considering the distribution of order statistics. For
example, given M samples from a uniform distribution between zero and some ?, with
probability approaching one, the distance between the k-th and (k +1)-th order statistic can
be made arbitrarily large as ? moves towards infinity. Likewise, with the J(a) distribution,
the relative scaling between order statistics can be increased without bound as a decreases
towards zero, leading to the stated result.
Corollary 2. Assume that D? < N randomly selected elements of w? are set to zero.
Then as a approaches zero, the probability that we satisfy the conditions of Corollary 1
approaches unity.
In conclusion, we have shown that a simple, (approximate) noninformative Jeffreys prior
leads to sparse inverse problems that are optimally solved via SBL with high probability.
Interestingly, it is this same Jeffreys prior that forms the implicit weight prior of SBL (see
[6], Section 5.1). However, it is worth mentioning
that other Jeffreys prior-based techQ
niques, e.g., direct minimization of p(w) = i |w1i | subject to t = ?w, do not provide
any SBL-like guarantees. Although several algorithms do exist that can perform such a
minimization task (e.g., [7, 8]), they perform poorly with respect to (1) because of convergence to local minimum as shown in [9, 10]. This is especially true if the weights are highly
scaled, and no nontrivial equivalence results are known to exist for these procedures.
3
Worst-Case Scenario
If the best-case scenario occurs when the nonzero weights are all of very different scales,
it seems reasonable that the most difficult sparse inverse problem may involve weights of
?
the same or even identical scale, e.g., w
?1? = w
?2? = . . . w
?D
? . This notion can be formalized
?
?
somewhat by considering the w distribution that is furthest from the Jeffreys prior. First,
we note that both the SBL cost function and update rules are independent of the overall
? ? is functionally equivalent to w
? ? provided
scaling of the generating weights, meaning ?w
? is nonzero. This invariance must be taken
account in our analysis. Therefore, we
P into
assume the weights are rescaled such that i w
?i? = 1. Given this restriction, we will find
the distribution of weight magnitudes that is most different from the Jeffreys prior.
Using the standard procedure for changing the parameterization of a probability density,
the joint density of the constrained variables can be computed simply as
?
p(w
?1? , . . . , w
?D
?)
1
? QD?
?i?
i=1 w
?
for
D
X
i=1
w
?1? = w
?2?
w
?i? = 1, w
?i? ? 0, ?i.
(13)
?
From this expression, it is easily shown that
= ... = w
?D
? achieves the global
minimum. Consequently, equal weights are the absolute least likely to occur from the
Jeffreys prior. Hence, we may argue that the distribution that assigns w
?i? = 1/D? with
probability one is furthest from the constrained Jeffreys prior.
Nevertheless, because of the complexity of the SBL framework, it is difficult to prove ax? ? ? 1 is overall the most problematic distribution with respect to sparse
iomatically that w
recovery. We can however provide additional motivation for why we should expect it to
be unwieldy. As proven in [9], the global minimum of the SBL cost function is guaranteed to produce some w? ? W ? . This minimum is achieved with the hyperparameters
?i? = (wi? )2 , ?i. We can think of this solution as forming a collapsed, or degenerate covariance ??t = ??? ?T that occupies a proper D? -dimensional subspace of N -dimensional
signal space. Moreover, this subspace must necessarily contain the signal vector t. Essentially, ??t proscribes infinite density to t, leading to the globally minimizing solution.
Now consider an alternative covariance ??t that, although still full rank, is nonetheless illconditioned (flattened), containing t within its high density region. Furthermore, assume
that ??t is not well aligned with the subspace formed by ??t . The mixture of two flattened, yet misaligned covariances naturally leads to a more voluminous (less dense) form
as measured by the determinant |???t + ???t |. Thus, as we transition from ??t to ??t , we
necessarily reduce the density at t, thereby increasing the cost function L(?). So if SBL
converges to ??t it has fallen into a local minimum.
? ? are likely to create the most situations where
So the question remains, what values of w
this type of local minima occurs? The issue is resolved when we again consider the D? dimensional subspace
by ??t . The volume of the covariance within this sub ?determined
?T
?
?
?
? ? and ?
? ? are the basis vectors and hyperparameters
space is given by ?? ? , where ?
?
? . The larger this volume, the higher the probability that other basis vecassociated with w
tors will be suitably positioned so as to both (i), contain t within the high density portion
and (ii), maintain a sufficient component that is misaligned with the optimal covariance.
? ? ?T
? ?
? ?
? under the constraints P w
The maximum volume of ?
? ? = 1 and ?? ? = (w
? ? )2
i
i
i
i
occurs with ??i? = 1/(D? )2 , i.e., all the w
?i? are equal. Consequently, geometric considerations support the notion that deviance from the Jeffreys prior leads to difficulty recovering
w? . Moreover, empirical analysis (not shown) of the relationship between volume and
local minimum avoidance provide further corroboration of this hypothesis.
4
Empirical Comparisons
The central purpose of this section is to present empirical evidence that supports our theoretical analysis and illustrates the improved performance afforded by SBL. As previously
mentioned, others have established deterministic equivalence conditions, dependent on D? ,
whereby BP and OMP are guaranteed to find the unique w? . Unfortunately, the relevant
theorems are of little value in assessing practical differences between algorithms. This is
because, in the cases we have tested where BP/OMP equivalence is provably known to hold
(e.g., via results in [1, 4, 5]), SBL always converges to w? as well.
As such, we will focuss our attention on the insights provided by Sections 2 and 3 as well
as probabilistic comparisons with [3]. Given a fixed distribution for the nonzero elements
of w? , we will assess which algorithm is best (at least empirically) for most dictionaries
relative to a uniform measure on the unit sphere as discussed.
To this effect, a number of monte-carlo simulations were conducted, each consisting of the
following: First, a random, overcomplete N ? M dictionary ? is created whose entries
are each drawn uniformly from the surface of an N -dimensional hypersphere. Next, sparse
weight vectors w? are randomly generated with D? nonzero entries. Nonzero amplitudes
? ? are drawn iid from an experiment-dependent distribution. Response values are then
w
computed as t = ?w? . Each algorithm is presented with t and ? and attempts to estimate
w? . In all cases, we ran 1000 independent trials and compared the number of times each
algorithm failed to recover w? . Under the specified conditions for the generation of ?
and t, all other feasible solutions w almost surely have a diversity greater than D? , so
our synthetically generated w? must be maximally sparse. Moreover, ? will almost surely
satisfy the URP.
With regard to particulars, there are essentially four variables with which to experiment: (i)
? ? , (ii) the diversity D? , (iii) N , and (iv) M . In Figure 1, we display
the distribution of w
results from an array of testing conditions. In each row of the figure, w
?i? is drawn iid from
a fixed distribution for all i; the first row uses w
?i? = 1, the second has w
?i? ? J(a = 0.001),
?
and the third uses w
?i ? N (0, 1), i.e., a unit Gaussian. In all cases, the signs of the nonzero
weights are irrelevant due to the randomness inherent in the basis vectors.
The columns of Figure 1 are organized as follows: The first column is based on the values
N = 50, D? = 16, while M is varied from N to 5N , testing the effects of an increasing
level of dictionary redundancy, M/N . The second fixes N = 50 and M = 100 while D?
is varied from 10 to 30, exploring the ability of each algorithm to resolve an increasing
number of nonzero weights. Finally, the third column fixes M/N = 2 and D? /N ? 0.3
while N , M , and D? are increased proportionally. This demonstrates how performance
scales with larger problem sizes.
Error Rate
(w/ unit weights)
Redundancy Test
(N = 50, D* = 16)
Error Rate
(w/ Jeffreys weights)
Signal Size Test
(M/N = 2, D*/N = 0.32)
1
1
1
0.8
0.8
0.8
0.6
0.6
0.6
0.4
0.4
0.4
0.2
0.2
0.2
0
Error Rate
(w/ Gaussian weights)
Diversity Test
(N = 50, M = 100)
1
2
3
4
5
0
10
15
20
25
30
0
25
1
1
1
0.8
0.8
0.8
0.6
0.6
0.6
0.4
0.4
0.4
0.2
0.2
0.2
0
1
2
3
4
5
0
10
15
20
25
30
0
25
1
1
1
0.8
0.8
0.8
0.6
0.6
0.6
0.4
0.4
0.4
0.2
0.2
0.2
0
1
2
3
4
5
Redundancy Ratio (M/N)
0
10
15
20
25
Diversity (D*)
30
0
25
50
75
100 125 150
OMP
BP
SBL
50
75
100 125 150
50 75 100 125 150
Signal Size (N)
Figure 1: Empirical results comparing the probability that OMP, BP, and SBL fail to find
w? under various testing conditions. Each data point is based on 1000 independent trials.
The distribution of the nonzero weight amplitudes is labeled on the far left for each row,
while the values for N , M , and D? are included on the top of each column. Independent
variables are labeled along the bottom of the figure.
The first row of plots essentially represents the worst-case scenario for SBL per our previous analysis, and yet performance is still consistently better than both BP and OMP. In
contrast, the second row of plots approximates the best-case performance for SBL, where
we see that SBL is almost infallible. The handful of failure events that do occur are because
a is not sufficiently small and therefore, J(a) was not sufficiently close to a true Jeffreys
prior to achieve perfect equivalence (see center plot). Although OMP also does well here,
the parameter a can generally never be adjusted such that OMP always succeeds. Finally,
the last row of plots, based on Gaussian distributed weight amplitudes, reflects a balance
between these two extremes. Nonetheless, SBL still holds a substantial advantage.
In general, we observe that SBL is capable of handling more redundant dictionaries (column one) and resolving a larger number of nonzero weights (column two). Also, column
three illustrates that both BP and SBL are able to resolve a number of weights that grows
linearly in the signal dimension (? 0.3N ), consistent with the analysis in [3] (which applies
only to BP). In contrast, OMP performance begins to degrade in some cases (see the upper
right plot), a potential limitation of this approach. Of course additional study is necessary
to fully compare the relative performance of these methods on large-scale problems.
Finally, by comparing row one, two and three, we observe that the performance of BP is
roughly independent of the weight distribution, with performance slightly below the worst-
case SBL performance. Like SBL, OMP results are highly dependent on the distribution;
however, as the weight distribution approaches unity, performance is unsatisfactory. In
summary, while the relative proficiency between OMP and BP is contingent on experimental particulars, SBL is uniformly superior in the cases we have tested (including examples
not shown, e.g., results with other dictionary types).
5
Conclusions
In this paper, we have related the ability to find maximally sparse solutions to the particular distribution of amplitudes that compose the nonzero elements. At first glance, it may
seem reasonable that the most difficult sparse inverse problems occur when some of the
nonzero weights are extremely small, making them difficult to estimate. Perhaps surprisingly then, we have shown that the exact opposite is true with SBL: The more diverse the
weight magnitudes, the better the chances we have of learning the optimal solution. In
contrast, unit weights offer the most challenging task for SBL. Nonetheless, even in this
worst-case scenario, we have shown that SBL outperforms the current state-of-the-art; the
overall assumption here being that, if worst-case performance is superior, then it is likely
to perform better in a variety of situations.
For a fixed dictionary and diversity D? , successful recovery of unit weights does not absolutely guarantee that any alternative weighting scheme will necessarily be recovered as
well. However, a weaker result does appear to be feasible: For fixed values of N , M ,
and D? , if the success rate recovering unity weights approaches one for most dictionaries, where most is defined as in Section 1, then the success rate recovering weights of any
other distribution (assuming they are distributed independently of the dictionary) will also
approach one. While a formal proof of this conjecture is beyond the scope of this paper,
it seems to be a very reasonable result that is certainly born out by experimental evidence,
geometric considerations, and the arguments presented in Section 3. Nonetheless, this remains a fruitful area for further inquiry.
References
[1] D. Donoho and M. Elad, ?Optimally sparse representation in general (nonorthogonal) dictionaries via ?1 minimization,? Proc. Nat. Acad. Sci., vol. 100, no. 5, pp. 2197?2202, March 2003.
[2] R. Gribonval and M. Nielsen, ?Sparse representations in unions of bases,? IEEE Transactions
on Information Theory, vol. 49, pp. 3320?3325, Dec. 2003.
[3] D. Donoho, ?For most large underdetermined systems of linear equations the minimal ?1 -norm
solution is also the sparsest solution,? Stanford University Technical Report, September 2004.
[4] J.J. Fuchs, ?On sparse representations in arbitrary redundant bases,? IEEE Transactions on
Information Theory, vol. 50, no. 6, pp. 1341?1344, June 2004.
[5] J.A. Tropp, ?Greed is good: Algorithmic results for sparse approximation,? IEEE Transactions
on Information Theory, vol. 50, no. 10, pp. 2231?2242, October 2004.
[6] M.E. Tipping, ?Sparse Bayesian learning and the relevance vector machine,? Journal of Machine
Learning Research, vol. 1, pp. 211?244, 2001.
[7] I.F. Gorodnitsky and B.D. Rao, ?Sparse signal reconstruction from limited data using FOCUSS:
A re-weighted minimum norm algorithm,? IEEE Transactions on Signal Processing, vol. 45, no.
3, pp. 600?616, March 1997.
[8] M.A.T. Figueiredo, ?Adaptive sparseness using Jeffreys prior,? Advances in Neural Information
Processing Systems 14, pp. 697?704, 2002.
[9] D.P. Wipf and B.D. Rao, ??0 -norm minimization for basis selection,? Advances in Neural
Information Processing Systems 17, pp. 1513?1520, 2005.
[10] D.P. Wipf and B.D. Rao, ?Sparse Bayesian learning for basis selection,? IEEE Transactions on
Signal Processing, vol. 52, no. 8, pp. 2153?2164, 2004.
[11] D.P. Wipf, To appear in Bayesian Methods for Sparse Signal Representation, PhD Dissertation,
UC San Diego, 2006 (estimated). http://dsp.ucsd.edu/?dwipf/
| 2771 |@word … |
1,951 | 2,772 | Describing Visual Scenes using
Transformed Dirichlet Processes
Erik B. Sudderth, Antonio Torralba, William T. Freeman, and Alan S. Willsky
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
[email protected], [email protected], [email protected], [email protected]
Abstract
Motivated by the problem of learning to detect and recognize objects
with minimal supervision, we develop a hierarchical probabilistic model
for the spatial structure of visual scenes. In contrast with most existing
models, our approach explicitly captures uncertainty in the number of
object instances depicted in a given image. Our scene model is based on
the transformed Dirichlet process (TDP), a novel extension of the hierarchical DP in which a set of stochastically transformed mixture components are shared between multiple groups of data. For visual scenes,
mixture components describe the spatial structure of visual features in an
object-centered coordinate frame, while transformations model the object positions in a particular image. Learning and inference in the TDP,
which has many potential applications beyond computer vision, is based
on an empirically effective Gibbs sampler. Applied to a dataset of partially labeled street scenes, we show that the TDP's inclusion of spatial
structure improves detection performance, flexibly exploiting partially
labeled training images.
1 Introduction
In this paper, we develop methods for analyzing the features composing a visual scene,
thereby localizing and categorizing the objects in an image. We would like to design learning algorithms that exploit relationships among multiple, partially labeled object categories
during training. Working towards this goal, we propose a hierarchical probabilistic model
for the expected spatial locations of objects, and the appearance of visual features corresponding to each object. Given a new image, our model provides a globally coherent
explanation for the observed scene, including estimates of the location and category of an
a priori unknown number of objects.
This generative approach is motivated by the pragmatic need for learning algorithms which
require little manual supervision and labeling. While discriminative models may produce
accurate classifiers, they typically require very large training sets even for relatively simple categories [1]. In contrast, generative approaches can discover large, visually salient
categories (such as foliage and buildings [2]) without supervision. Partial segmentations
can then be used to learn semantically interesting categories (such as cars and pedestrians)
which are less visually distinctive, or present in fewer training images. Moreover, generative models provide a natural framework for learning contextual relationships between
objects, and transferring knowledge between related, but distinct, visual scenes.
Figure 1: A scene with faces as described by three generative models. Constellation: Fixed parts
of a single face in unlocalized clutter. LDA: Bag of unlocalized face and background features. TDP:
Spatially localized clusters of background clutter, and one or more faces (in this case, the sample
contains one face and two background clusters). Note: The LDA and TDP images are sampled from
models learned from training images, while the Constellation image is a hand-constructed illustration.
The principal challenge in developing hierarchical models for scenes is specifying tractable,
scalable methods for handling uncertainty in the number of objects. This issue is entirely
ignored by most existing models. We address this problem using Dirichlet processes [3], a
tool from nonparametric Bayesian analysis for learning mixture models whose number of
components is not fixed, but instead estimated from data. In particular, we extend the recently proposed hierarchical Dirichlet process (HDP) [4, 5] framework to allow more flexible sharing of mixture components between images. The resulting transformed Dirichlet
process (TDP) is naturally suited to our scene understanding application, as well as many
other domains where "style and content" are combined to produce the observed data [6].
We begin in Sec. 2 by reviewing several related generative models for objects and scenes.
Sec. 3 then introduces Dirichlet processes and develops the TDP model, including MCMC
methods for learning and inference. We specialize the TDP to visual scenes in Sec. 4, and
conclude in Sec. 5 by demonstrating object recognition and segmentation in street scenes.
2 Generative Models for Objects and Scenes
Constellation models [7] describe single objects via the appearance of a fixed, and typically small, set of spatially constrained parts (see Fig. 1). Although they can successfully
recognize objects in cluttered backgrounds, they do not directly provide a mechanism for
detecting multiple object instances. In addition, it seems difficult to generalize the fixed set
of constellation parts to problems where the number of objects is uncertain.
Grammars, and related rule-based systems, were one of the earliest approaches to scene understanding [8]. More recently, distributions over hierarchical tree-structured partitions
of image pixels have been used to segment simple scenes [9, 10]. In addition, an image
parsing [11] framework has been proposed which explains an image using a set of regions
generated by generic or object-specific processes. While this model allows uncertainty in
the number of regions, and hence the number of objects, the high dimensionality of the
model state space requires good, discriminatively trained bottom-up proposal distributions
for acceptable MCMC performance. We also note that the BLOG language [12] provides
a promising framework for reasoning about unknown objects. As of yet, however, the
computational tools needed to apply BLOG to large-scale applications are unavailable.
Inspired by techniques from the text analysis literature, several recent papers analyze scenes
using a spatially unstructured bag of features extracted from local image patches (see
Fig. 1). In particular, latent Dirichlet allocation (LDA) [13] describes the features $x_{ji}$ in image $j$ using a $K$ component mixture model with parameters $\theta_k$. Each image reuses these same mixture parameters in different proportions $\pi_j$ (see the graphical model of Fig. 2). By appropriately defining these shared mixtures, LDA may be used to discover object categories from images of single objects [2], categorize natural scenes [14], and (with a slight
extension) parse presegmented captioned images [15].
While these LDA models are sometimes effective, their neglect of spatial structure ignores
valuable information which is critical in challenging object detection tasks. We recently
proposed a hierarchical extension of LDA which learns shared parts describing the internal
structure of objects, and contextual relationships among known groups of objects [16]. The
transformed Dirichlet process (TDP) addresses a key limitation of this model by allowing
uncertainty in the number and identity of the objects depicted in each image. As detailed
in Sec. 4 and illustrated in Fig. 1, the TDP effectively provides a textural model in which
locally unstructured clumps of features are given global spatial structure by the inferred set
of objects underlying each scene.
3 Hierarchical Modeling using Dirichlet Processes
In this section, we review Dirichlet process mixture models (Sec. 3.1) and previously proposed hierarchical extensions (Sec. 3.2). We then introduce the transformed Dirichlet process (TDP) (Sec. 3.3), and discuss Monte Carlo methods for learning TDPs (Sec. 3.4).
3.1 Dirichlet Process Mixture Models
Let $\theta$ denote a parameter taking values in some space $\Theta$, and $H$ be a measure on $\Theta$. A Dirichlet process (DP), denoted by $\mathrm{DP}(\gamma, H)$, is then a distribution over measures on $\Theta$, where the concentration parameter $\gamma$ controls the similarity of samples $G \sim \mathrm{DP}(\gamma, H)$ to the base measure $H$. Samples from DPs are discrete with probability one, a property highlighted by the following stick-breaking construction [4]:
$$G(\theta) = \sum_{k=1}^{\infty} \pi_k\,\delta(\theta, \theta_k) \qquad \pi_k' \sim \mathrm{Beta}(1, \gamma) \qquad \pi_k = \pi_k' \prod_{\ell=1}^{k-1} \left(1 - \pi_\ell'\right) \qquad (1)$$
Each parameter $\theta_k \sim H$ is independently sampled, while the weights $\pi = (\pi_1, \pi_2, \dots)$ use Beta random variables to partition a unit-length "stick" of probability mass.
In nonparametric Bayesian statistics, DPs are commonly used as prior distributions for mixture models with an unknown number of components [3]. Let $F(\theta)$ denote a family of distributions parameterized by $\theta$. Given $G \sim \mathrm{DP}(\gamma, H)$, each observation $x_i$ from an exchangeable data set $\mathbf{x}$ is generated by first choosing a parameter $\bar\theta_i \sim G$, and then sampling $x_i \sim F(\bar\theta_i)$. Computationally, this process is conveniently described by a set $\mathbf{z}$ of independently sampled variables $z_i \sim \mathrm{Mult}(\pi)$ indicating the component of the mixture $G(\theta)$ (see eq. (1)) associated with each data point $x_i \sim F(\theta_{z_i})$.
Integrating over G, the indicator variables z demonstrate an important clustering property.
Letting $n_k$ denote the number of times component $\theta_k$ is chosen by the first $(i-1)$ samples,

$$p\left(z_i \mid z_1, \dots, z_{i-1}, \alpha\right) = \frac{1}{\alpha + i - 1}\left[\sum_k n_k\,\delta(z_i, k) + \alpha\,\delta(z_i, \bar{k})\right] \qquad (2)$$
Here, $\bar{k}$ indicates a previously unused mixture component (a priori, all unused components are equivalent). This process is sometimes described by analogy to a Chinese restaurant in which the (infinite collection of) tables correspond to the mixture components $\theta_k$, and customers to observations $x_i$ [4]. Customers are social, tending to sit at tables with many
other customers (observations), and each table shares a single dish (parameter).
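The clustering behavior of eqs. (1)-(2) is easy to simulate. The sketch below is our illustration, not code from the paper; the truncation level K and all function names are assumptions. It draws truncated stick-breaking weights and sequential CRP indicators with NumPy:

```python
import numpy as np

def stick_breaking(gamma, K, rng):
    """Truncated draw of the DP weights in eq. (1)."""
    beta = rng.beta(1.0, gamma, size=K)                  # pi'_k ~ Beta(1, gamma)
    tails = np.concatenate(([1.0], np.cumprod(1.0 - beta[:-1])))
    return beta * tails                                  # pi_k = pi'_k prod_{l<k}(1 - pi'_l)

def crp_assignments(n, alpha, rng):
    """Sequential cluster indicators z_i from the predictive rule in eq. (2)."""
    z, counts = [], []
    for i in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= alpha + i                               # i customers already seated
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(0)                             # open a new component k-bar
        counts[k] += 1
        z.append(k)
    return np.array(z)

rng = np.random.default_rng(0)
pi = stick_breaking(gamma=1.0, K=20, rng=rng)
z = crp_assignments(n=100, alpha=1.0, rng=rng)
```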
3.2 Hierarchical Dirichlet Processes
In many domains, there are several groups of data produced by related, but distinct,
generative processes. For example, in this paper's applications each group is an image, and the data are visual features composing a scene. Given $J$ groups of data, let $\mathbf{x}_j = (x_{j1}, \dots, x_{jn_j})$ denote the $n_j$ exchangeable data points in group $j$.

Figure 2: Graphical representations of the LDA, HDP, and TDP models for sharing mixture components $\theta_k$, with proportions $\pi_j$, among $J$ groups of exchangeable data $\mathbf{x}_j = (x_{j1}, \dots, x_{jn_j})$. LDA directly assigns observations $x_{ji}$ to clusters via indicators $z_{ji}$. HDP and TDP models use "table" indicators $t_{ji}$ as an intermediary between observations and assignments $k_{jt}$ to an infinite global mixture with weights $\pi$. TDPs augment each table $t$ with a transformation $\rho_{jt}$ sampled from a distribution parameterized by $\eta_{k_{jt}}$. Specializing the TDP to visual scenes (right), we model the position $y_{ji}$ and appearance $w_{ji}$ of features using distributions $\eta_o$ indexed by unobserved object categories $o_{ji}$.

Hierarchical Dirichlet processes (HDPs) [4, 5] describe grouped data with a coupled set of mixture models. To construct an HDP, a global probability measure $G_0 \sim \mathrm{DP}(\gamma, H)$ is first chosen to define a set of shared mixture components. A measure $G_j \sim \mathrm{DP}(\alpha, G_0)$ is then independently sampled for each group. Because $G_0$ is discrete (as in eq. (1)), groups $G_j$ will reuse the same mixture components $\theta_k$ in different proportions:

$$G_j(\theta) = \sum_{k=1}^{\infty} \tilde\pi_{jk}\,\delta(\theta, \theta_k) \qquad \tilde\pi_j \sim \mathrm{DP}(\alpha, \pi) \qquad (3)$$

In this construction, shared components improve generalization when learning from few examples, while distinct mixture weights capture differences between groups.
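Under a finite truncation of the stick-breaking weights, the DP in eq. (3) has Dirichlet finite-dimensional marginals, so resampling a group's proportions reduces to one line. A minimal sketch, assuming a truncated $\pi$ with strictly positive entries; the function name is ours:

```python
import numpy as np

def hdp_group_weights(pi, alpha, rng):
    """Draw group proportions pi_j ~ DP(alpha, pi) over a finite truncation,
    using the Dirichlet finite-dimensional marginals Dir(alpha*pi_1, ..., alpha*pi_K)."""
    return rng.dirichlet(alpha * np.asarray(pi))
```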
The generative process underlying HDPs may be understood in terms of an extension of the
DP analogy known as the Chinese restaurant franchise [4]. Each group defines a separate
restaurant in which customers (observations) $x_{ji}$ sit at tables $t_{ji}$. Each table shares a single dish (parameter) $\theta$, which is ordered from a menu $G_0$ shared among restaurants (groups). Letting $k_{jt}$ indicate the parameter $\theta_{k_{jt}}$ assigned to table $t$ in group $j$, we may integrate over $G_0$ and $G_j$ (as in eq. (2)) to find the conditional distributions of these indicator variables:

$$p\left(t_{ji} \mid t_{j1}, \dots, t_{ji-1}, \alpha\right) \propto \sum_{t} n_{jt}\,\delta(t_{ji}, t) + \alpha\,\delta(t_{ji}, \bar{t}) \qquad (4)$$

$$p\left(k_{jt} \mid \mathbf{k}_1, \dots, \mathbf{k}_{j-1}, k_{j1}, \dots, k_{jt-1}, \gamma\right) \propto \sum_{k} m_k\,\delta(k_{jt}, k) + \gamma\,\delta(k_{jt}, \bar{k}) \qquad (5)$$

Here, $m_k$ is the number of tables previously assigned to $\theta_k$. As before, customers prefer tables $t$ at which many customers $n_{jt}$ are already seated (eq. (4)), but sometimes choose a new table $\bar{t}$. Each new table is assigned a dish $k_{j\bar{t}}$ according to eq. (5). Popular dishes are more likely to be ordered, but a new dish $\theta_{\bar{k}} \sim H$ may also be selected.

The HDP generative process is summarized in the graphical model of Fig. 2. Given the assignments $\mathbf{t}_j$ and $\mathbf{k}_j$ for group $j$, observations are sampled as $x_{ji} \sim F(\theta_{z_{ji}})$, where $z_{ji} = k_{j t_{ji}}$ indexes the shared parameters assigned to the table associated with $x_{ji}$.
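A minimal sketch of one seating decision in the Chinese restaurant franchise, combining eqs. (4) and (5); the data structures and names are our own, and the parameter draw $\theta_{\bar{k}} \sim H$ is omitted:

```python
import numpy as np

def crf_seat_customer(n_jt, k_jt, m_k, alpha, gamma, rng):
    """Seat one customer in restaurant j via eqs. (4)-(5).

    n_jt: occupancy count of each table in this restaurant (list, mutated)
    k_jt: dish index served at each table (list, mutated)
    m_k : number of tables serving each dish across the franchise (list, mutated)
    """
    # eq. (4): existing tables in proportion to n_jt, a new table with weight alpha
    table_w = np.array(n_jt + [alpha], dtype=float)
    t = rng.choice(len(table_w), p=table_w / table_w.sum())
    if t == len(n_jt):                        # new table t-bar: order a dish via eq. (5)
        dish_w = np.array(m_k + [gamma], dtype=float)
        k = rng.choice(len(dish_w), p=dish_w / dish_w.sum())
        if k == len(m_k):
            m_k.append(0)                     # brand-new dish, theta_k-bar ~ H (not drawn here)
        m_k[k] += 1
        n_jt.append(0)
        k_jt.append(k)
    n_jt[t] += 1
    return t, k_jt[t]
```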
3.3 Transformed Dirichlet Processes
In the HDP model of Fig. 2, the group distributions $G_j$ are derived from the global distribution $G_0$ by resampling the mixture weights from a Dirichlet process (see eq. (3)), leaving the component parameters $\theta_k$ unchanged. In many applications, however, it is difficult to define $\Theta$ so that parameters may be exactly reused between groups. Consider, for example, a Gaussian distribution describing the location at which object features are detected in an image. While the covariance of that distribution may stay relatively constant across object instances, the mean will change dramatically from image to image (group to group), depending on the objects' position relative to the camera.
Motivated by these difficulties, we propose the Transformed Dirichlet Process (TDP), an extension of the HDP in which global mixture components undergo a set of random transformations before being reused in each group. Let $\rho$ denote a transformation of the parameter vector $\theta \in \Theta$, $\eta$ the parameters of a distribution $Q$ over transformations, and $R$ a measure on the space of transformation parameters. We begin by augmenting the DP stick-breaking construction of eq. (1) to create a global measure describing both parameters and transformations:

$$G_0(\theta, \rho) = \sum_{k=1}^{\infty} \pi_k\,\delta(\theta, \theta_k)\,q(\rho \mid \eta_k) \qquad \theta_k \sim H \qquad \eta_k \sim R \qquad (6)$$
As before, $\pi$ is sampled from a stick-breaking process with parameter $\gamma$. For each group, we then sample a measure $G_j \sim \mathrm{DP}(\alpha, G_0)$. Marginalizing over transformations $\rho$, $G_j(\theta)$ reuses parameters from $G_0(\theta)$ exactly as in eq. (3). Because samples from DPs are discrete, the joint measure for group $j$ then has the following form:

$$G_j(\theta, \rho) = \sum_{k=1}^{\infty} \tilde\pi_{jk}\,\delta(\theta, \theta_k)\left[\sum_{\ell=1}^{\infty} \pi_{jk\ell}\,\delta(\rho, \rho_{jk\ell})\right] \qquad \sum_{\ell=1}^{\infty} \pi_{jk\ell} = 1 \qquad (7)$$
Note that within the $j$th group, each shared parameter vector $\theta_k$ may potentially be reused multiple times with different transformations $\rho_{jk\ell}$. Conditioning on $\theta_k$, it can be shown that $G_j(\rho \mid \theta_k) \sim \mathrm{DP}(\alpha\pi_k, Q(\eta_k))$, so that the proportions $\pi_{jk}$ of features associated with each transformation of $\theta_k$ follow a stick-breaking process with parameter $\alpha\pi_k$.
Each observation $x_{ji}$ is now generated by sampling $(\bar\theta_{ji}, \bar\rho_{ji}) \sim G_j$, and then choosing $x_{ji} \sim F(\bar\theta_{ji}, \bar\rho_{ji})$ from a distribution which transforms $\bar\theta_{ji}$ by $\bar\rho_{ji}$. Although the global family of transformation distributions $Q(\eta)$ is typically non-atomic, the discreteness of $G_j$ ensures that transformations are shared between observations within group $j$.
Computationally, the TDP is more conveniently described via an extension of the Chinese restaurant franchise analogy (see Fig. 2). As before, customers (observations) $x_{ji}$ sit at tables $t_{ji}$ according to the clustering bias of eq. (4), and new tables choose dishes according to their popularity across the franchise (eq. (5)). Now, however, the dish (parameter) $\theta_{k_{jt}}$ at table $t$ is seasoned (transformed) according to $\rho_{jt} \sim q(\rho_{jt} \mid \eta_{k_{jt}})$. Each time a dish is ordered, the recipe is seasoned differently.
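To make the seasoning concrete, the sketch below draws one group from a toy TDP with two-dimensional Gaussian clusters and translation transformations, following eqs. (6)-(7) through the seasoned-dish analogy. The isotropic transformation scale tau and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sample_tdp_group(pi, thetas, alpha, tau, n_points, rng):
    """Draw one group from a toy TDP (eqs. (6)-(7)) with 2D Gaussian clusters
    thetas[k] = (mu_k, sigma_k) and translations rho ~ N(0, tau^2 I)."""
    tables, counts, data = [], [], []
    for _ in range(n_points):
        table_w = np.array(counts + [alpha], dtype=float)
        t = rng.choice(len(table_w), p=table_w / table_w.sum())
        if t == len(tables):
            k = rng.choice(len(pi), p=pi)            # dish from the global weights
            rho = rng.normal(0.0, tau, size=2)       # each new table seasons its dish
            tables.append((k, rho))
            counts.append(0)
        counts[t] += 1
        k, rho = tables[t]
        mu, sigma = thetas[k]
        data.append(rng.normal(mu + rho, sigma))     # x_ji ~ F(theta_k, rho_jt)
    return np.array(data), tables
```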
3.4 Learning via Gibbs Sampling
To learn the parameters of a TDP, we extend the HDP Gibbs sampler detailed in [4]. The simplest implementation samples table assignments $\mathbf{t}$, cluster assignments $\mathbf{k}$, transformations $\boldsymbol{\rho}$, and parameters $\boldsymbol{\theta}$, $\boldsymbol{\eta}$. Let $\mathbf{t}_{\backslash ji}$ denote all table assignments excluding $t_{ji}$, and define $\mathbf{k}_{\backslash jt}$, $\boldsymbol{\rho}_{\backslash jt}$ similarly. Using the Markov properties of the TDP (see Fig. 2), we have

$$p\left(t_{ji} = t \mid \mathbf{t}_{\backslash ji}, \mathbf{k}, \boldsymbol{\rho}, \boldsymbol{\theta}, \mathbf{x}\right) \propto p\left(t \mid \mathbf{t}_{\backslash ji}\right) f\left(x_{ji} \mid \theta_{k_{jt}}, \rho_{jt}\right) \qquad (8)$$
The first term is given by eq. (4). For a fixed set of transformations $\boldsymbol{\rho}$, the second term is
a simple likelihood evaluation for existing tables, while new tables may be evaluated by
marginalizing over possible cluster assignments (eq. (5)).
Because cluster assignments $k_{jt}$ and transformations $\rho_{jt}$ are strongly coupled in the posterior, a blocked Gibbs sampler which jointly resamples them converges much more rapidly:

$$p\left(k_{jt} = k, \rho_{jt} \mid \mathbf{k}_{\backslash jt}, \boldsymbol{\rho}_{\backslash jt}, \mathbf{t}, \boldsymbol{\theta}, \boldsymbol{\eta}, \mathbf{x}\right) \propto p\left(k \mid \mathbf{k}_{\backslash jt}\right) q\left(\rho_{jt} \mid \eta_k\right) \prod_{t_{ji}=t} f\left(x_{ji} \mid \theta_k, \rho_{jt}\right)$$
For the models considered in this paper, $F$ is conjugate to $Q$ for any fixed observation value. We may thus analytically integrate over $\rho_{jt}$ and, combined with eq. (5), sample a new cluster assignment $\bar{k}_{jt}$. Conditioned on $\bar{k}_{jt}$, we again use conjugacy to sample $\bar{\rho}_{jt}$. We also choose the parameter priors $H$ and $R$ to be conjugate to $Q$ and $F$, respectively, so that standard formulas may be used to resample $\boldsymbol{\theta}$, $\boldsymbol{\eta}$.

Figure 3: Comparison of hierarchical models learned via Gibbs sampling from synthetic 2D data. Left: Four of 50 "images" used for training. Center: Global distribution $G_0(\theta)$ for the HDP, where ellipses are covariance estimates and intensity is proportional to prior probability. Right: Global TDP distribution $G_0(\theta, \rho)$ over both clusters $\theta$ (solid) and translations $\rho$ of those clusters (dashed).
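A sketch of the collapsed table move in eq. (8); the two likelihood callbacks stand in for the conjugate computations described above and are our assumptions, not the authors' code:

```python
import numpy as np

def resample_table(x_ji, tables, counts, alpha, loglik, new_table_loglik, rng):
    """One Gibbs move for a table indicator t_ji, following eq. (8): the CRP
    prior of eq. (4) times f(x_ji | theta_k_jt, rho_jt). `loglik(x, k, rho)`
    and `new_table_loglik(x)` (k and rho marginalized out, as in eq. (5))
    are model-supplied callbacks."""
    log_post = [np.log(counts[t]) + loglik(x_ji, k, rho)
                for t, (k, rho) in enumerate(tables)]
    log_post.append(np.log(alpha) + new_table_loglik(x_ji))
    log_post = np.array(log_post)
    p = np.exp(log_post - log_post.max())            # stabilized softmax
    return rng.choice(len(p), p=p / p.sum())
```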
4 Transformed Dirichlet Processes for Visual Scenes
4.1 Context-Free Modeling of Multiple Object Categories
In this section, we adapt the TDP model of Sec. 3.3 to describe the spatial structure of
visual scenes. Groups j now correspond to training, or test, images. For the moment,
we assume that the observed data $x_{ji} = (o_{ji}, y_{ji})$, where $y_{ji}$ is the position of a feature corresponding to object category $o_{ji}$, and the number of object categories $O$ is known (see Fig. 2). We then choose cluster parameters $\theta_k = (\bar{o}_k, \mu_k, \Lambda_k)$ to describe the mean $\mu_k$ and covariance $\Lambda_k$ of a Gaussian distribution over feature positions, as well as the single object category $\bar{o}_k$ assigned to all observations sampled from that cluster. Although this cluster
parameterization does not capture contextual relationships between object categories, the
results of Sec. 5 demonstrate that it nevertheless provides an effective model of the spatial
variability of individual categories across many different scenes.
To model the variability in object location from image to image, transformation parameters
$\rho_{jt}$ are defined to translate feature position relative to that cluster's "canonical" mean $\mu_k$:

$$p\left(o_{ji}, y_{ji} \mid t_{ji} = t, \mathbf{k}_j, \boldsymbol{\rho}_j, \boldsymbol{\theta}\right) = \delta(o_{ji}, \bar{o}_{k_{jt}})\;\mathcal{N}\left(y_{ji};\, \mu_{k_{jt}} + \rho_{jt},\, \Lambda_{k_{jt}}\right) \qquad (9)$$
We note that there is a different translation $\rho_{jt}$ associated with each table $t$, allowing the
same object cluster to be reused at multiple locations within a single image. This flexibility, which is not possible with HDPs, is critical to accurately modeling visual scenes.
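As a concrete reading of eq. (9), the sketch below scores a feature against one table, returning negative infinity when the category indicator $\delta(o_{ji}, \bar{o}_{k_{jt}})$ fails; the names are ours:

```python
import numpy as np

def scene_loglik(o, y, table, clusters):
    """Log of eq. (9): the feature matches a table only if its category equals
    the cluster's, with position Gaussian about the translated mean."""
    k, rho = table
    o_bar, mu, Lam = clusters[k]
    if o != o_bar:
        return -np.inf                               # delta(o_ji, o-bar_k) term
    diff = np.asarray(y) - (mu + rho)
    _, logdet = np.linalg.slogdet(2 * np.pi * Lam)
    return -0.5 * (logdet + diff @ np.linalg.solve(Lam, diff))
```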
Density models for spatial transformations have been previously used to recognize isolated
objects [17], and estimate layered decompositions of video sequences [18]. In contrast, the
proposed TDP models the variability of object positions across scenes, and couples this
with a nonparametric prior allowing uncertainty in the number of objects.
To ensure that the TDP scene model is identifiable, we define $p(\rho_{jt} \mid \mathbf{k}_j, \boldsymbol{\eta})$ to be a zero-mean Gaussian with covariance $\eta_{k_{jt}}$. The parameter prior $H$ is uniform across object categories, while $R$ and $H$ both use inverse-Wishart position distributions, weakly biased towards moderate covariances. Fig. 3 shows a 2D synthetic example based on a single object category ($O = 1$). Following 100 Gibbs sampling iterations, the TDP correctly discovers that the data is composed of elongated "bars" in the upper right, and round "blobs"
in the lower left. In contrast, the learned HDP uses a large set of global clusters to discretize
the transformations underlying the data, and thus generalizes poorly to new translations.
4.2 Detecting Objects from Image Features
To apply the TDP model of Sec. 4.1 to images, we must learn the relationship between object categories and visual features. As in [2, 16], we obtain discrete features by vector quantizing SIFT descriptors [19] computed over locally adapted elliptical regions. To improve
discriminative power, we divide these elliptical regions into three groups (roughly circular, and horizontally or vertically elongated) prior to quantizing SIFT values, producing a discrete vocabulary with 1800 appearance "words". Given the density of feature detection, these descriptors essentially provide a multiscale over-segmentation of the image.
We assume that the appearance $w_{ji}$ of each detected feature is independently sampled conditioned on the underlying object category $o_{ji}$ (see Fig. 2). Placing a symmetric Dirichlet prior, with parameter $\lambda$, on each category's multinomial appearance distribution $\eta_o$,

$$p\left(w_{ji} = b \mid o_{ji} = o, \mathbf{w}_{\backslash ji}, \mathbf{t}, \mathbf{k}, \lambda\right) \propto c_b^o + \lambda \qquad (10)$$

where $c_b^o$ is the number of times feature $b$ is currently assigned to object $o$. Because a
single object category is associated with each cluster, the Gibbs sampler of Sec. 3.4 may be
easily adapted to this case by incorporating eq. (10) into the assignment likelihoods.
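The collapsed predictive implied by eq. (10), normalized over the 1800-word vocabulary, is essentially a one-liner. A sketch, assuming the count bookkeeping is maintained by the sampler:

```python
import numpy as np

def appearance_loglik(b, o, counts, lam, vocab_size=1800):
    """Collapsed Dirichlet-multinomial predictive of eq. (10):
    p(w_ji = b | o_ji = o, ...) = (c_b^o + lam) / (sum_b' c_b'^o + lam * vocab_size)."""
    c = counts[o]                                    # per-word counts for object o
    return np.log(c[b] + lam) - np.log(c.sum() + lam * vocab_size)
```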
5 Analyzing Street Scenes
To demonstrate the potential of our TDP scene model, we consider a set of street scene
images (250 training, 75 test) from the MIT-CSAIL database. These images contain three
"objects": buildings, cars (side views), and roads. All categories were labeled in 112 images, while in the remainder only cars were segmented. Training from semi-supervised
data is accomplished by restricting object category assignments for segmented features.
Fig. 4 shows the four global object clusters learned following 100 Gibbs sampling iterations. There is one elongated car cluster, one large building cluster, and two road clusters
with differing shapes. Interestingly, the model has automatically determined that building
features occur in large homogeneous patches, while road features are sparse and better described by many smaller transformed clusters. To segment test images, we run the Gibbs
sampler for 50 iterations from each of 10 random initializations. Fig. 4 shows segmentations produced by averaging these samples, as well as transformed clusters from the final
iteration. Qualitatively, results are typically good, although foliage is often mislabeled as
road due to the textural similarities with features detected in shadows across roads.
For comparison, we also trained an LDA model based solely on feature appearance, allowing three topics per object category and again using object labels to restrict the Gibbs
sampler's assignments [16]. As shown by the ROC curves of Fig. 4, our TDP model of spatial scene structure significantly improves segmentation performance. In addition, through
the set of transformed car clusters generated by the Gibbs sampler, the TDP explicitly estimates the number of object instances underlying each image. These detections, which are
not possible using LDA, are based on a single global parsing of the scene which automatically estimates object locations without a "sliding window" [1].
6 Discussion
We have developed the transformed Dirichlet process, a hierarchical model which shares a
set of stochastically transformed clusters among groups of data. Applied to visual scenes,
TDPs provide a model of spatial structure which allows the number of objects generating
an image to be automatically inferred, and lead to improved detection performance. We
are currently investigating extensions of the basic TDP scene model presented in this paper
which describe the internal structure of objects, and also incorporate richer contextual cues.
Acknowledgments
Funding provided by the National Geospatial-Intelligence Agency NEGI-1582-04-0004, the National
Science Foundation NSF-IIS-0413232, the ARDA VACE program, and a grant from BAE Systems.
Figure 4: TDP analysis of street scenes containing cars (red), buildings (green), and roads (blue). Top right: Global model $G_0$ describing object shape (solid) and expected transformations (dashed). Bottom right: ROC curves comparing TDP feature segmentation performance to an LDA model of feature appearance. Left: Four test images (first row), estimated segmentations of features into object categories (second row), transformed global clusters associated with each image interpretation (third row), and features assigned to different instances of the transformed car cluster (fourth row).

References
[1] P. Viola and M. J. Jones. Robust real-time face detection. IJCV, 57(2):137-154, 2004.
[2] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering objects and their location in images. In ICCV, 2005.
[3] M. D. Escobar and M. West. Bayesian density estimation and inference using mixtures. J. Amer. Stat. Assoc., 90(430):577-588, June 1995.
[4] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Technical Report 653, U.C. Berkeley Statistics, October 2004.
[5] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. In NIPS 17, pages 1385-1392. MIT Press, 2005.
[6] J. B. Tenenbaum and W. T. Freeman. Separating style and content with bilinear models. Neural Comp., 12:1247-1283, 2000.
[7] L. Fei-Fei, R. Fergus, and P. Perona. A Bayesian approach to unsupervised one-shot learning of object categories. In ICCV, volume 2, pages 1134-1141, 2003.
[8] J. M. Tenenbaum and H. G. Barrow. Experiments in interpretation-guided segmentation. Artif. Intel., 8:241-274, 1977.
[9] A. J. Storkey and C. K. I. Williams. Image modeling with position-encoding dynamic trees. IEEE Trans. PAMI, 25(7):859-871, July 2003.
[10] J. M. Siskind et al. Spatial random tree grammars for modeling hierarchal structure in images. Submitted to IEEE Trans. PAMI, 2004.
[11] Z. Tu, X. Chen, A. L. Yuille, and S. C. Zhu. Image parsing: Unifying segmentation, detection, and recognition. In ICCV, volume 1, pages 18-25, 2003.
[12] B. Milch, B. Marthi, S. Russell, D. Sontag, D. L. Ong, and A. Kolobov. BLOG: Probabilistic models with unknown objects. In IJCAI 19, pages 1352-1359, 2005.
[13] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[14] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, volume 2, pages 524-531, 2005.
[15] K. Barnard et al. Matching words and pictures. JMLR, 3:1107-1135, 2003.
[16] E. B. Sudderth, A. Torralba, W. T. Freeman, and A. S. Willsky. Learning hierarchical models of scenes, objects, and parts. In ICCV, 2005.
[17] E. G. Miller, N. E. Matsakis, and P. A. Viola. Learning from one example through shared densities on transforms. In CVPR, volume 1, pages 464-471, 2000.
[18] N. Jojic and B. J. Frey. Learning flexible sprites in video layers. In CVPR, volume 1, pages 199-206, 2001.
[19] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
| 2772 |@word … |
1,952 | 2,773 | Silicon Growth Cones Map Silicon Retina
Brian Taba and Kwabena Boahen∗
Department of Bioengineering
University of Pennsylvania
Philadelphia, PA 19104
{btaba,boahen}@seas.upenn.edu
Abstract
We demonstrate the first fully hardware implementation of retinotopic
self-organization, from photon transduction to neural map formation.
A silicon retina transduces patterned illumination into correlated spike
trains that drive a population of silicon growth cones to automatically
wire a topographic mapping by migrating toward sources of a diffusible
guidance cue that is released by postsynaptic spikes. We varied the pattern of illumination to steer growth cones projected by different retinal
ganglion cell types to self-organize segregated or coordinated retinotopic
maps.
1 Introduction
Engineers have long admired the brain's ability to effortlessly adapt to novel situations without instruction, and sought to endow digital computers with a similar capacity for unsupervised self-organization. One prominent example is Kohonen's self-organizing map
[1], which achieved popularity by distilling neurophysiological insights into a simple set
of mathematical equations. Although these algorithms are readily simulated in software,
previous hardware implementations have required high precision components that are expensive in chip area (e.g. [2, 3]). By contrast, neurobiological systems can self-organize
components possessing remarkably heterogeneous properties. To pursue this biological robustness against component mismatch, we designed circuits that mimic neurophysiological
function down to the subcellular level. In this paper, we demonstrate topographic refinement of connections between a silicon retina and the first neuromorphic self-organizing
map chip, previously reported in [5], which is based on axon migration in the developing
brain.
During development, neurons wire themselves into their mature circuits by extending axonal and dendritic precursors called neurites. Each neurite is tipped by a motile sensory
structure called a growth cone that guides the elongating neurite based on local chemical cues. Growth cones move by continually sprouting and retracting finger-like extensions called filopodia whose dynamics can be biased by diffusible ligands in an activity-dependent manner [4]. Based on these observations, we designed and fabricated the Neurotrope1 chip to implement a population of silicon growth cones [5]. We interfaced Neurotrope1 directly to a spiking silicon retina to illustrate its applicability to larger neuromorphic systems.

∗ www.neuroengineering.upenn.edu/boahen

Figure 1: Neurotropic axon guidance. a. Active source cells (grey) relay spikes down their axons to their growth cones, which excite nearby target cells. b. Active target cell bodies secrete neurotropin. c. Neurotropin spreads laterally, establishing a spatial concentration gradient that is sampled by active growth cones. d. Active growth cones climb the local neurotropin gradient, translating temporal activity coincidence into spatial position coincidence. Growth cones move by displacing other growth cones.
This paper is organized as follows. In Section 2, we present an algorithm for axon migration under the guidance of a diffusible chemical whose release and uptake is gated by
activity. In Section 3, we describe our hardware implementation of this algorithm. In Section 4, we examine the Neurotrope1 system's performance on a topographic refinement
task when driven by spike trains generated by a silicon retina in response to several types
of illumination stimuli.
2 Neurotropic axon guidance
We model the self-organization of connections between two layers of neurons (Fig. 1).
Cells in the source layer innervate cells in the target layer with excitatory axons that are
tipped by motile growth cones. Growth cones tow their axons within the target layer as
directed by a diffusible guidance factor called neurotropin that they bind from the local
extracellular environment. Neurotropin is released by postsynaptically active target cell
bodies and bound by presynaptically active growth cones, so the retrograde transfer of
neurotropin from a target cell to a source cell measures the temporal coincidence of their
spike activities. Growth cones move to maximize their neurotropic uptake, a Hebbian-like
learning rule that causes cells that fire at the same time to wire to the same place. To
prevent the population of growth cones from attempting to trivially maximize their uptake
by all exciting the same target cell, we impose a synaptic density constraint that requires a
migrating growth cone to displace any other growth cone occupying its path.
To state the model more formally, source cell bodies occupy nodes of a regular two-dimensional (2D) lattice embedded in the source layer, while growth cones and target cell bodies occupy nodes on separate 2D lattices that are interleaved in the target layer. We index nodes by their positions in their respective layers, using Greek letters for source layer positions (e.g., $\alpha \in \mathbb{Z}^2$) and Roman letters for target layer positions (e.g., $x, c \in \mathbb{Z}^2$). Each source cell $\alpha$ fires spikes at a rate $a_{SC}(\alpha)$ and conveys this presynaptic activity down an axon that elaborates an excitatory arbor in the target layer centered on $c(\alpha)$. In principle,
every branch of this arbor is tipped by its own motile growth cone, but to facilitate efficient
Figure 2: a. Neurotrope1 system. Spike communication is by address-events (AER). b.
Neurotrope1 cell mosaic. The extracellular medium (grey) is laid out as a monolithic honeycomb lattice. Growth cones (GC) occupy nodes of this lattice and extend filopodia to the
adjacent nodes. Neurotropin receptors (black) are located at the tip of each filopodium and
at the growth cone body. Target cells (N) occupy nodes of an interleaved triangular lattice.
c. Detail of chip layout.
hardware implementation, we abstract the collection of branch growth cones into a single central growth cone that tows the arbor's trunk around the target layer, dragging the rest of the arbor with it. The arbor overlaps nearby target cells with a branch density $A(x - c(\alpha))$ that diminishes with distance $\|x - c(\alpha)\|$ from the arbor center. The postsynaptic activity $a_{TC}(x)$ of target cell $x$ is proportional to the linear sum of its excitation:

$$a_{TC}(x) = \sum_{\alpha} a_{SC}(\alpha)\, A(x - c(\alpha)) \qquad (1)$$
Postsynaptically active target cell bodies release neurotropin, which spreads laterally until
consumed by constitutive decay processes. The neurotropin $n(x')$ present at target site $x'$ is assembled from contributions from all active release sites. The contribution of each target cell $x$ is proportional to its postsynaptic activity and weighted by a spreading kernel $N(x - x')$ that is a decreasing function of its distance $\|x - x'\|$ from the measurement site $x'$.

$$n(x') = \sum_{x} a_{TC}(x)\, N(x - x') \qquad (2)$$
A presynaptically active growth cone located at $c(\alpha)$ computes the direction of the local neurotropin gradient by identifying the adjacent lattice node $c'(\alpha) \in C(c(\alpha))$ with the most neurotropin, where $C(c(\alpha))$ includes $c(\alpha)$ and its nearest neighbors.

$$c'(\alpha) = \arg\max_{x' \in C(c(\alpha))} n(x') \qquad (3)$$

Once the growth cone has identified $c'(\alpha)$, it swaps positions with the growth cone already located at $c'(\alpha)$, increasing its own neurotropic uptake while preserving a constant synaptic density. Growth cones compute position updates independently, at a rate $\lambda(\alpha) \propto a_{SC}(\alpha) \max_{x' \in C(c(\alpha))} n(x')$. Updates are executed asynchronously, in order of their arrival.
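The full update loop of eqs. (1)-(3) can be prototyped in a few lines. In this sketch the kernels A and N are precomputed matrices over a shared node index, and the activity-ordered loop is only a proxy for the asynchronous, rate-proportional update schedule; all names are assumptions:

```python
import numpy as np

def simulate_update(a_src, pos, A, N, neighbors):
    """One round of the guidance rule, eqs. (1)-(3), on a toy lattice.

    a_src[s]    : presynaptic rate of source cell s
    pos[s]      : target-layer node occupied by source s's growth cone (list, mutated)
    A[x, c]     : arbor density at target node x for a cone centered at c
    N[x, x2]    : neurotropin spreading kernel
    neighbors[x]: x together with its adjacent lattice nodes
    """
    a_src = np.asarray(a_src, dtype=float)
    a_tgt = A[:, pos] @ a_src                  # eq. (1): postsynaptic activity
    n = N.T @ a_tgt                            # eq. (2): neurotropin at every node
    occupant = {pos[s]: s for s in range(len(pos))}
    for s in np.argsort(-a_src):               # proxy for rate-proportional update order
        c = pos[s]
        c_new = max(neighbors[c], key=lambda x: n[x])   # eq. (3)
        if c_new == c:
            continue
        other = occupant.get(c_new)
        if other is not None:
            pos[other], occupant[c] = c, other          # displaced cone takes vacated node
        else:
            del occupant[c]
        pos[s], occupant[c_new] = c_new, s
    return pos, n
```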
Software simulation of a similar set of equations generates self-organized feature maps
when driven by appropriately correlated source cell activity [6]. Here, we illustrate topographic map formation in hardware using correlated spike trains generated by a silicon
retina.
Figure 3: Virtual axon remapping. a. Cell bodies tag their spikes with their own source
layer addresses, which the forward lookup table translates into target layer destinations.
b. Axon updates are computed by growth cones, which decode their own target layer
addresses through the reverse lookup table to obtain the source layer addresses of their cell
bodies that identify their entries in the forward lookup table. c. Growth cones move by
modifying their entries in the forward and reverse lookup tables to reroute their spikes to
updated locations.
3 Neurotrope1 system
Our hardware implementation splits the model into three stages: the source layer, the target
layer, and the intervening axons (Fig. 2a). Any population of spiking neurons can act
as a source layer; in this paper we employ the silicon retina of [7]. The target layer is
implemented by a full custom VLSI chip that interleaves a 48 × 20 array of growth cone circuits with a 24 × 20 array of target cell circuits. There is also a spreading network
that represents the intervening medium for propagating neurotropin. The Neurotrope1 chip
was fabricated by MOSIS using the TSMC 0.35 µm process and has an area of 11.5 mm².
Connections are specified as entries in a pair of lookup tables, stored in an off-chip RAM,
that are updated by a Ubicom ip2022 microcontroller as instructed by the Neurotrope1 chip.
The ip2022 also controls a USB link that allows a computer to write and read the contents
of the RAM. Subsection 3.1 explains how updates are computed by the Neurotrope1 chip
and Subsection 3.2 describes the procedure for executing these updates.
3.1 Axon updates
Axon updates are computed by the Neurotrope1 chip using the transistor circuits described
in [5]. Here, we provide a brief description. The Neurotrope1 chip represents neurotropin
as charge spreading through a monolithic transistor channel laid out as a honeycomb lattice. Each growth cone occupies one node of this lattice and extends filopodia to the three
adjacent nodes, expressing neurotropin receptors at all four locations (Fig. 2b-c). When
a growth cone receives a presynaptic spike, its receptor circuits tap charge from all four
nodes onto separate capacitors. The first capacitor voltage to integrate to a threshold resets
all of the growth cone's capacitors and transmits a request off-chip to update the growth cone's position by swapping locations with the growth cone currently occupying the winning node.
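A toy software analogue of this integrate-to-threshold race (a sketch under our own assumptions, not the transistor circuit itself):

```python
import numpy as np

def filopodia_race(n_local, spike_count, threshold):
    """Each presynaptic spike adds the neurotropin-proportional charge at the
    four sampled nodes (body plus three filopodia) to four capacitors; the
    first voltage to cross threshold wins and resets the race."""
    v = np.zeros(4)
    for _ in range(spike_count):
        v += n_local                         # n_local: neurotropin at the 4 nodes
        if v.max() >= threshold:
            return int(np.argmax(v))         # winning node requests the swap
    return None                              # no update requested yet
```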
3.2 Address-event remapping
Chips in the Neurotrope1 system exchange spikes encoded in the address-event representation (AER) [8], an asynchronous communication protocol that merges spike trains from
every cell on the same chip onto a single shared data link instead of requiring a dedicated
wire for each connection. Each spike is tagged with the address of its originating cell for
transmission off-chip. Between chips, spikes are routed through a forward lookup table
that translates their original source layer addresses into their destined target layer addresses
on the receiving chip (Fig. 3a). An axon entry in this forward lookup table is indexed by the source layer address of its cell body and contains the target layer address of its growth cone. The virtual axon moves by updating this entry.

Figure 4: Retinotopic self-organization of ON-center RGCs. a. Silicon retina color map of ON-center RGC body positions. A representative RGC body is outlined in white, as are the RGC neighbors that participate in its topographic order parameter $\Phi^{(n)}$. b. Target layer color map of growth cone positions for sample $n = 0$, colored by the retinal positions of their cell bodies. Growth cones projected by the representative RGC and its nearest neighbors are outlined in white. Grey lines denote target layer distances used to compute $\Phi^{(n)}$. c. Target layer color map at $n = 85$. d. Order parameter evolution.
Axon updates are computed by growth cone circuits on the Neurotrope1 chip, encoded as
address-events, and sent to the ip2022 for processing. Each update identifies a pair of axon
terminals to be swapped. These growth cone addresses are translated through a reverse
lookup table into the source layer addresses that index the relevant forward lookup table
entries (Fig. 3b). Modification of the affected entries in each lookup table completes the
axon migration (Fig. 3c).
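In software, the entire migration reduces to four table edits. A sketch with dictionaries standing in for the off-chip RAM; the names are ours:

```python
def swap_axons(fwd, rev, gc_a, gc_b):
    """Swap two growth cones' target-layer locations by editing the forward
    (source -> target) and reverse (target -> source) lookup tables (Fig. 3c)."""
    src_a, src_b = rev[gc_a], rev[gc_b]      # decode cell-body addresses (Fig. 3b)
    fwd[src_a], fwd[src_b] = gc_b, gc_a      # reroute future spikes
    rev[gc_a], rev[gc_b] = src_b, src_a      # keep the reverse table consistent
```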
4 Retinotopic self-organization
We programmed the growth cone population to self-organize retinotopic maps by driving
them with correlated spike trains generated by the silicon retina. The silicon retina translates patterned illumination in real-time into spike trains that are fed into the Neurotrope1
chip as presynaptic input from different retinal ganglion cell (RGC) types. An ON-center
RGC is excited by a spot of light in the center of its receptive field and inhibited by light in
the surrounding annulus, while an OFF-center RGC responds analogously to the absence
of light. There is an ON-center and an OFF-center RGC located at every retinal coordinate.
To generate appropriately correlated RGC spike trains, we illuminated the silicon retina
with various mixtures of light and dark spot stimuli. Each spot stimulus was presented
against a uniformly grey background for 100 ms and covered a contiguous cluster of RGCs
centered on a pseudorandomly selected position in the retinal plane, eliciting overlapping
bursts of spikes whose coactivity established a spatially restricted presynaptic correlation
kernel containing enough information to instruct topographic ordering [9]. Strongly driven
RGCs could fire at nearly 1 kHz, which was the highest mean rate at which the silicon retina
could still be tuned to roughly balance ON- and OFF-center RGC excitability. We tracked
the evolution of the growth cone population by reading out the contents of the lookup table
every five minutes, a sampling interval selected to include enough patch stimuli to allow
each of the 48 × 20 possible patches to be activated on average at least once per sample.
We first induced retinotopic self-organization within a single RGC cell type by illuminating the silicon retina with a sequence of randomly centered spots of light presented against
a grey background, selectively activating only ON-center RGCs. Each of the 960 growth
cones was randomly assigned to a different ON-center RGC, creating a scrambled map from retina to target layer (Fig. 4a-b). The ON-center RGC growth cone population visibly refined the topography of the nonretinotopic initial state (Fig. 4c). We quantify this observation by introducing an order parameter $\Phi^{(n)}$ whose value measures the instantaneous retinotopy for an RGC at the $n$th sample. The definition of retinotopy is that adjacent RGCs innervate adjacent target cells, so we define $\Phi^{(n)}$ for a given RGC to be the average target layer distance separating its growth cone from the growth cones projected by the six adjacent RGCs of the same cell type. The population average $\langle \Phi^{(n)} \rangle$ converges to a value that represents the achievable performance on this task (Fig. 4d).

Figure 5: Segregation by cell type under separate light and dark spot stimulation. Top: ON-center; bottom: OFF-center. a. Silicon retina image of representative spot stimulus. Light or dark intensity denotes relative ON- or OFF-center RGC output rate. b. Spike rates for ON-center (grey) and OFF-center (black) RGCs in column $x$ of a cross-section of a representative spot stimulus. c. Target layer color maps of RGC growth cones at sample $n = 0$. Black indicates the absence of a growth cone projected by an RGC of this cell type. Other colors as in Fig. 4. d. Target layer color maps at $n = 310$. e. Order parameter evolution for ON-center (grey) and OFF-center (black) RGCs.
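A sketch of this order parameter, assuming growth cone positions and the six-neighbor retinal adjacency are available as plain arrays; names are ours:

```python
import numpy as np

def order_parameter(cone_pos, retina_neighbors):
    """Population-average order parameter <Phi>: for each RGC, the mean
    target-layer distance between its growth cone and the cones of its six
    nearest retinal neighbors of the same cell type."""
    phi = np.empty(len(cone_pos))
    for r, nbrs in enumerate(retina_neighbors):
        d = [np.linalg.norm(np.subtract(cone_pos[r], cone_pos[q])) for q in nbrs]
        phi[r] = np.mean(d)
    return phi.mean()
```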
We next induced growth cones projected by each cell type to self-organize disjoint topographic maps by illuminating the silicon retina with a sequence of randomly centered light
or dark spots presented against a grey background (Fig. 5a-b). Half the growth cones
were assigned to ON-center RGCs and the other half were assigned to the corresponding
OFF-center RGCs. We seeded the system with a random projection that evenly distributed
growth cones of both cell types across the entire target layer (Fig. 5c). Since only RGCs of
the same cell type were coactive, growth cones segregated into ON- and OFF-center clusters on opposite sides of the target layer (Fig. 5d). OFF-center RGCs were slightly more
excitable on average than ON-center RGCs, so their growth cones refined their topography
more quickly (Fig. 5e) and clustered in the right half of the target layer, which was also
more excitable due to poor power distribution on the Neurotrope1 chip.
Finally, we induced growth cones of both cell types to self-organize coordinated retinotopic maps by illuminating the retina with center-surround stimuli that oscillate radially
from light to dark or vice versa (Fig. 6). The light-dark oscillation injected enough coactivity between neighboring ON- and OFF-center RGCs to prevent their growth cones from
segregating by cell type into disjoint clusters. Instead, both subpopulations developed and
maintained coarse retinotopic maps that cover the entire target layer and are oriented in
register with one another, properties sufficient to seed more interesting circuits such as
oriented receptive fields [10].
Figure 6: Coordinated retinotopy under center-surround stimulation. Top: ON-center; bottom: OFF-center. a. Silicon retina image of a representative center-surround stimulus. Light or dark intensity denotes relative ON- or OFF-center RGC output rate. b. Spike rates for ON-center (grey) and OFF-center (black) RGCs in column $x$ of a cross-section of a representative center-surround stimulus. c. Target layer color maps of RGC growth cones for sample $n = 0$. Colors as in Fig. 5. d. Target layer color maps at $n = 335$. e. Order parameter evolution for ON-center (grey) and OFF-center (black) RGCs.

Performance in this hardware implementation is limited mainly by variability in the behavior of nominally identical circuits on the Neurotrope1 chip and the silicon retina. In the
silicon retina, the wide variance of the RGC output rates [7] limits both the convergence
speed and the final topographic level achieved by the spot-driven growth cone population.
Growth cones move faster when stimulated at higher rates, but elevating the mean output
rate of the RGC population allows more excitable RGCs to fire spontaneously at a sustained rate, swamping growth cone-specific guidance signals with stimulus-independent
postsynaptic activity that globally attracts all growth cones. The mean RGC output rate
must remain low enough to suppress these spontaneous distractors, limiting convergence
speed. Variance in the output rates of neighboring RGCs also distorts the shape of the spot
stimulus, eroding the fidelity of the correlation-encoded instructions received by the growth
cones.
Variability in the Neurotrope1 chip further limits topographic convergence. Migrating
growth cones are directed by the local neurotropin landscape, which forms an image of
recent presynaptic activity correlations as filtered through the postsynaptic activation of
the target cell population. This image is distorted by variations between the properties of
individual target cell and neurotropin circuits that are introduced during fabrication. In
particular, poor power distribution on the Neurotrope1 chip creates a systematic gradient
in target cell excitability that warps a growth cone's impression of the relative coactivity of
its neighbors, attracting it preferentially toward the more excitable target cells on the right
side of the array.
5 Conclusions
In this paper, we demonstrated a completely neuromorphic implementation of retinotopic
self-organization. This is the first time every stage of the process has been implemented
entirely in hardware, from photon transduction through neural map formation. The only
comparable system was described in [11], which processed silicon retina data offline using
a software model of neurotrophic guidance running on a workstation. Our system computes
results in real time at low power, two prerequisites for autonomous mobile applications.
The novel infrastructure developed to implement virtual axon migration allows silicon
growth cones to directly interface with an existing family of AER-compliant devices,
enabling a host of multimodal neuromorphic self-organizing applications. In particular,
the silicon retina's ability to translate arbitrary visual stimuli into growth cone-compatible
spike trains in real-time opens the door to more ambitious experiments such as using natural
video correlations to automatically wire more complicated visual feature maps.
Our faithful adherence to cellular level details yields an algorithm that is well suited to
physical implementation. In contrast to all previous self-organizing map chips (e.g. [2, 3]),
which implemented a global winner-take-all function to induce competition, our silicon
growth cones compute their own updates using purely local information about the neurotropin gradient, a cellular approach that scales effortlessly to larger populations. Performance might be improved by supplementing our purely morphogenetic model with additional physiologically-inspired mechanisms to prune outliers and consolidate well-placed
growth cones into permanent synapses.
Acknowledgments
We would like to thank J. Arthur for developing a USB system to facilitate data collection.
This project was funded by the David and Lucille Packard Foundation and the NSF/BITS
program (EIA0130822).
References
[1] T. Kohonen (1982), "Self-organized formation of topologically correct feature maps," Biol. Cybernetics, vol. 43, no. 1, pp. 59-69.
[2] W.-C. Fang, B.J. Sheu, O.T.-C. Chen, and J. Choi (1992), "A VLSI neural processor for image data compression using self-organization networks," IEEE Trans. Neural Networks, vol. 3, no. 3, pp. 506-518.
[3] S. Rovetta and R. Zunino (1999), "Efficient training of neural gas vector quantizers with analog circuit implementation," IEEE Trans. Circ. & Sys. II, vol. 46, no. 6, pp. 688-698.
[4] E.W. Dent and F.B. Gertler (2003), "Cytoskeletal dynamics and transport in growth cone motility and axon guidance," Neuron, vol. 40, pp. 209-227.
[5] B. Taba and K. Boahen (2003), "Topographic map formation by silicon growth cones," in: Advances in Neural Information Processing Systems 15 (MIT Press, Cambridge, eds. S. Becker, S. Thrun, and K. Obermayer), pp. 1163-1170.
[6] S.Y.M. Lam, B.E. Shi, and K.A. Boahen (2005), "Self-organized cortical map formation by guiding connections," Proc. 2005 IEEE Int. Symp. Circ. & Sys., in press.
[7] K.A. Zaghloul and K. Boahen (2004), "Optic nerve signals in a neuromorphic chip I: Outer and inner retina models," IEEE Trans. Bio-Med. Eng., vol. 51, no. 4, pp. 657-666.
[8] K. Boahen (2000), "Point-to-point connectivity between neuromorphic chips using address-events," IEEE Trans. Circ. & Sys. II, vol. 47, pp. 416-434.
[9] K. Miller (1994), "A model for the development of simple cell receptive fields and the ordered arrangement of orientation columns through activity-dependent competition between on- and off-center inputs," J. Neurosci., vol. 14, no. 1, pp. 409-441.
[10] D. Ringach (2004), "Haphazard wiring of simple receptive fields and orientation columns in visual cortex," J. Neurophys., vol. 92, no. 1, pp. 468-476.
[11] T. Elliott and J. Kramer (2002), "Coupling an aVLSI neuromorphic vision chip to a neurotrophic model of synaptic plasticity: the development of topography," Neural Comp., vol. 14, no. 10, pp. 2353-2370.
1,953 | 2,774 | Learning Multiple Related Tasks using Latent
Independent Component Analysis
Jian Zhang†, Zoubin Ghahramani†‡, Yiming Yang†
† School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
‡ Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, UK
{jian.zhang, zoubin, yiming}@cs.cmu.edu
Abstract
We propose a probabilistic model based on Independent Component
Analysis for learning multiple related tasks. In our model the task parameters are assumed to be generated from independent sources which
account for the relatedness of the tasks. We use Laplace distributions
to model hidden sources which makes it possible to identify the hidden,
independent components instead of just modeling correlations. Furthermore, our model enjoys a sparsity property which makes it both parsimonious and robust. We also propose efficient algorithms for both empirical Bayes method and point estimation. Our experimental results on two
multi-label text classification data sets show that the proposed approach
is promising.
1 Introduction
An important problem in machine learning is how to generalize between multiple related
tasks. This problem has been called ?multi-task learning?, ?learning to learn?, or in some
cases ?predicting multivariate responses?. Multi-task learning has many potential practical
applications. For example, given a newswire story, predicting its subject categories as
well as the regional categories of reported events based on the same input text is such
a problem. Given the mass tandem spectra of a sample protein mixture, identifying the
individual proteins as well as the contained peptides is another example.
Much attention in machine learning research has been placed on how to effectively learn
multiple tasks, and many approaches have been proposed[1][2][3][4][5][6][10][11]. Existing approaches share the basic assumption that tasks are related to each other. Under this
general assumption, it would be beneficial to learn all tasks jointly and borrow information
from each other rather than learn each task independently. Previous approaches can be
roughly summarized based on how the "relatedness" among tasks is modeled, such as IID
tasks[2], a Bayesian prior over tasks[2][6][11], linear mixing factors[5][10], rotation plus
shrinkage[3] and structured regularization in kernel methods[4].
Like previous approaches, the basic assumption in this paper is that the multiple tasks are
related to each other. Consider the case where there are K tasks and each task is a binary
classification problem from the same input space (e.g., multiple simultaneous classifications of text documents). If we were to separately learn a classifier, with parameters θ_k for
each task k, we would be ignoring relevant information from the other classifiers. The assumption that the tasks are related suggests that the θ_k for different tasks should be related
to each other. It is therefore natural to consider different statistical models for how the θ_k's
might be related.
We propose a model for multi-task learning based on Independent Component Analysis
(ICA)[9]. In this model, the parameters θ_k for different classifiers are assumed to have
been generated from a sparse linear combination of a small set of basic classifiers. Both the
coefficients of the sparse combination (the factors or sources) and the basic classifiers are
learned from the data. In the multi-task learning context, the relatedness of multiple tasks
can be explained by the fact that they share certain number of hidden, independent components. By controlling the model complexity in terms of those independent components we
are able to achieve better generalization capability. Furthermore, by using distributions like
Laplace we are able to enjoy a sparsity property, which makes the model both parsimonious
and robust in terms of identifying the connections with independent sources. Our model
can be combined with many popular classifiers, and as an indispensable part we present
scalable algorithms for both empirical Bayes method and point estimation, with the later
being able to solve high-dimensional tasks. Finally, being a probabilistic model it is always
convenient to obtain probabilistic scores and confidence which are very helpful in making
statistical decisions. Further discussions on related work are given in Section 5.
2 Latent Independent Component Analysis
The model we propose for solving multiple related tasks, namely the Latent Independent
Component Analysis (LICA) model, is a hierarchical Bayesian model based on the traditional Independent Component Analysis. ICA[9] is a promising technique from signal
processing and designed to solve the blind source separation problem, whose goal is to
extract independent sources given only observed data that are linear combinations of the
unknown sources. ICA has been successfully applied to blind source separation problem
and shows great potential in that area. With the help of non-Gaussianity and higher-order
statistics it can correctly identify the independent sources, as opposed to technique like
Factor Analysis which is only able to remove the correlation in the data due to the intrinsic
Gaussian assumption in the corresponding model.
In order to learn multiple related tasks more effectively, we transform the joint learning
problem into learning a generative probabilistic model for our tasks (or more precisely,
task parameters), which precisely explains the relatedness of multiple tasks through the
latent, independent components. Unlike the standard Independent Component Analysis
where we use observed data to estimate the hidden sources, in LICA the ?observed data?
for ICA are actually task parameters. Consequently, they are latent and themselves need
to be learned from the training data of each individual task. Below we give the precise
definition of the probabilistic model for LICA.
Suppose we use θ₁, θ₂, …, θ_K to represent the model parameters of the K tasks, where θ_k ∈ ℝ^{F×1} can be thought of as the parameter vector of the k-th individual task. Consider the following generative model for the K tasks:

θ_k = Φ s_k + e_k,   s_k ∼ p(s_k | φ),   e_k ∼ N(0, Ψ)    (1)

Figure 1: Graphical Model for Latent Independent Component Analysis

where s_k ∈ ℝ^{H×1} are the hidden sources, with φ denoting their distribution parameters; Φ ∈ ℝ^{F×H} is a linear transformation matrix; and the noise vector e_k ∈ ℝ^{F×1} is usually assumed to be a multivariate Gaussian with diagonal covariance matrix Ψ = diag(ψ₁₁, …, ψ_FF), or even Ψ = σ²I. This essentially assumes that the hidden sources s are responsible for all the dependencies among the θ_k's, and that conditioned on them all θ_k's are independent. Generally speaking we can use any member of the exponential family as p(e_k), but in most situations the noise is taken to be a multivariate Gaussian, which is convenient. The graphical model for equation (1) is shown as the upper level in Figure 1,
whose lower part will be described in the following.
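To make the generative process concrete, the following is a minimal sketch that samples task parameters from Eq. (1) with unit-scale Laplace sources and diagonal Gaussian noise. The function name and the toy dimensions are our own illustrative choices, not part of the model specification; NumPy is assumed.

```python
import numpy as np

def sample_lica_tasks(Phi, Psi, K, rng=None):
    """Draw K task parameter vectors theta_k = Phi @ s_k + e_k (Eq. 1).

    Hidden sources s_k are i.i.d. unit-scale Laplace (the sparsity-inducing
    choice discussed in the text); noise e_k is zero-mean Gaussian with
    diagonal covariance Psi.
    """
    rng = np.random.default_rng(rng)
    F, H = Phi.shape
    S = rng.laplace(loc=0.0, scale=1.0, size=(K, H))         # hidden sources
    E = rng.normal(0.0, np.sqrt(np.diag(Psi)), size=(K, F))  # Gaussian noise
    Theta = S @ Phi.T + E                                    # task parameters
    return Theta, S

# toy example: F = 5 features, H = 2 independent sources, K = 4 tasks
Phi = np.random.default_rng(0).normal(size=(5, 2))
Theta, S = sample_lica_tasks(Phi, np.eye(5) * 0.01, K=4, rng=0)
```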
2.1 Probabilistic Discriminative Classifiers
One building block in the LICA is the probabilistic model for learning each individual
task, and in this paper we focus on classification tasks. We will use the following notation
to describe a probabilistic discriminative classifier for task k, and for notation simplicity we
omit the task index k below. Suppose we have training data D = {(x 1 , y1 ), . . . , (xN , yN )}
where xi ? RF ?1 is the input data vector and yi ? {0, 1} is the binary class label, our goal
is to seek a probabilistic classifier whose prediction is based on the conditional probability p(y = 1 | x) ≜ f(x) ∈ [0, 1]. We further assume the discriminative function to have a linear form f(x) = σ(θᵀx), which can be easily generalized to non-linear functions by some feature mapping. The output class label y can be thought of as randomly generated from a Bernoulli distribution with parameter σ(θᵀx), and the overall model can be summarized as follows:

y_i ∼ B(σ(θᵀx_i)),   σ(t) = ∫_{−∞}^{t} p(z) dz    (2)

where B(·) denotes the Bernoulli distribution and p(z) is the probability density function of some random variable Z. By changing the definition of the random variable Z we can specialize the above model into a variety of popular learning methods. For example, when p(z) is the standard logistic distribution we obtain the logistic regression classifier; when p(z) is the standard Gaussian we get probit regression. In principle any member of the above class of classifiers can be plugged into our LICA model, or even generative classifiers like Naive Bayes. We take logistic regression as the basic classifier, and this choice should not affect the main point of this paper. Also note that it is straightforward to extend the framework to regression tasks, whose likelihood function y_i ∼ N(θᵀx_i, σ²) can be handled by simple and efficient algorithms. Finally we would like to point out that although it is shown
by simple and efficient algorithms. Finally we would like to point out that although shown
in the graphical model that all training instances share the same input vector x, this is
mainly for notation simplicity and there is indeed no such restriction in our model. This is
convenient since in reality we may not be able to obtain all the task responses for the same
training instance.
3 Learning and Inference for LICA
The basic idea of the inference algorithm for the LICA is to iteratively estimate the task
parameters θ_k, hidden sources s_k, the mixing matrix Φ, and the noise covariance Ψ. Here
we present two algorithms, one for the empirical Bayes method, and the other for point
estimation which is more suitable for high-dimensional tasks.
3.1 Empirical Bayes Method
The graphical model shown in Figure 1 is an example of a hierarchical Bayesian model,
where the upper levels of the hierarchy model the relation between the tasks. We can use
an empirical Bayes approach and learn the parameters Θ = {Φ, Ψ, φ} from the data while
treating the variables Z = {θ_k, s_k}_{k=1}^{K} as hidden, random variables. To get around the
unidentifiability caused by the interaction between φ and s, we assume φ is of standard
parametric form (e.g. zero mean and unit variance) and thus remove it from Θ. The goal
is to learn point estimators Φ̂ and Ψ̂ as well as obtain posterior distributions over hidden
variables given the training data.
The log-likelihood of the incomplete data, log p(D | Θ),¹ can be calculated by integrating out the hidden variables:

log p(D | Θ) = Σ_{k=1}^{K} log ∫∫ { Π_{i=1}^{N} p(y_i^{(k)} | x_i, θ_k) } p(θ_k | s_k, Φ, Ψ) p(s_k | φ) ds_k dθ_k
for which the maximization over the parameters Θ = {Φ, Ψ} involves two complicated integrals, over θ_k and s_k respectively. Furthermore, for classification tasks the likelihood function p(y|x, θ) is typically non-exponential and thus exact calculation becomes intractable.
However, we can approximate the solution by applying the EM algorithm to decouple it
into a series of simpler E-steps and M-steps as follows:
1. E-step: Given the parameters Θ^{t−1} = {Φ, Ψ}^{t−1} from the (t−1)-th step, compute the distribution of the hidden variables given Θ^{t−1} and D: p(Z | Θ^{t−1}, D).
2. M-step: Maximize the expected log-likelihood of the complete data (Z, D), where the expectation is taken over the distribution of hidden variables obtained in the E-step: Θ^t = argmax_Θ E_{p(Z | Θ^{t−1}, D)}[log p(D, Z | Θ)].
The log-likelihood of the complete data can be written as

log p(D, Z | Θ) = Σ_{k=1}^{K} { Σ_{i=1}^{N} log p(y_i^{(k)} | x_i, θ_k) + log p(θ_k | s_k, Φ, Ψ) + log p(s_k | φ) }

where the first and third terms do not depend on Θ = {Φ, Ψ}. After some simplification the M-step can be summarized as {Φ̂, Ψ̂} = argmax_{Φ,Ψ} Σ_{k=1}^{K} E[log p(θ_k | s_k, Φ, Ψ)], which leads to the following updating equations:
Φ̂ = ( Σ_{k=1}^{K} E[θ_k s_kᵀ] ) ( Σ_{k=1}^{K} E[s_k s_kᵀ] )^{−1} ;   Ψ̂ = (1/K) ( Σ_{k=1}^{K} E[θ_k θ_kᵀ] − ( Σ_{k=1}^{K} E[θ_k s_kᵀ] ) Φ̂ᵀ )
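These updates translate directly into code once the posterior moments are available. The sketch below assumes each moment is supplied as a list of per-task NumPy arrays; the final diagonal projection reflects the diagonal-noise assumption on Ψ and is our reading, not spelled out in the update itself.

```python
import numpy as np

def m_step(E_theta_s, E_ss, E_theta_theta, K):
    """One M-step given the posterior moments from the E-step.

    E_theta_s[k]     ~ E[theta_k s_k^T]     (F x H)
    E_ss[k]          ~ E[s_k s_k^T]         (H x H)
    E_theta_theta[k] ~ E[theta_k theta_k^T] (F x F)
    """
    A = sum(E_theta_s)                 # sum_k E[theta_k s_k^T]
    B = sum(E_ss)                      # sum_k E[s_k s_k^T]
    Phi_hat = A @ np.linalg.inv(B)
    Psi_hat = (sum(E_theta_theta) - A @ Phi_hat.T) / K
    # keep only the diagonal, matching the diagonal-noise assumption on Psi
    Psi_hat = np.diag(np.diag(Psi_hat))
    return Phi_hat, Psi_hat
```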
In the E-step we need to calculate the posterior distribution p(Z | D, Θ) given the parameters Θ calculated in the previous M-step; essentially only the first- and second-order moments are needed.

¹ Here, with a little abuse of notation, we ignore the difference between discriminative and generative models at the classifier level and use p(D | θ_k) to denote the likelihood in general.
Algorithm 1 Variational Bayes for the E-step (subscript k is removed for simplicity)
1. Initialize q(s) with some standard distribution (a Laplace distribution in our case): q(s) = Π_{h=1}^{H} L(0, 1).
2. Solve the following Bayesian logistic regression (or other Bayesian classifier):
   q(θ) ← argmax_{q(θ)} ∫ q(θ) log [ N(θ; Φ E[s], Ψ) Π_{i=1}^{N} p(y_i | θ, x_i) / q(θ) ] dθ
3. Update q(s):
   q(s) ← argmax_{q(s)} ∫ q(s) [ log (p(s)/q(s)) − (1/2) Tr{ Ψ^{−1} ( E[θθᵀ] + Φ s sᵀ Φᵀ − 2 E[θ] (Φ s)ᵀ ) } ] ds
4. Repeat steps 2–3 until the convergence conditions are satisfied.
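Under the factorization q(θ)q(s), with q(θ) Gaussian and q(s) a product of unit-variance Laplaces, the moments that Algorithm 1 hands to the M-step follow in closed form; a small sketch with illustrative names:

```python
import numpy as np

def posterior_moments(m_theta, V_theta, m_s):
    """Moments of the factorized posterior q(theta)q(s) needed by the M-step.

    q(theta) = N(m_theta, V_theta); q(s) is a product of unit-variance
    Laplace distributions with mean m_s, so Cov[s] = I. Under the
    factorization assumption, E[theta s^T] = E[theta] E[s]^T.
    """
    E_theta = m_theta
    E_theta_theta = V_theta + np.outer(m_theta, m_theta)
    E_s = m_s
    E_ss = np.outer(m_s, m_s) + np.eye(len(m_s))
    E_theta_s = np.outer(E_theta, E_s)
    return E_theta, E_theta_theta, E_s, E_ss, E_theta_s
```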
The moments needed are E[θ_k], E[s_k], E[θ_k θ_kᵀ], E[s_k s_kᵀ] and E[θ_k s_kᵀ]. Since exact calculation is intractable, we approximate p(Z | D, Θ) with a q(Z) belonging to the exponential family, such that a certain distance measure (possibly asymmetric) between p(Z | D, Θ) and q(Z) is minimized. In our case we apply the variational Bayes method, which uses KL(q(Z) ‖ p(D, Z | Θ)) as the distance measure. The central idea is to lower-bound the log-likelihood using Jensen's inequality:

log p(D) = log ∫ p(D, Z) dZ ≥ ∫ q(Z) log ( p(D, Z) / q(Z) ) dZ.

The RHS of the above equation is what we want to maximize, and it is straightforward to show that maximizing this lower bound is equivalent to minimizing the KL-divergence KL(q(Z) ‖ p(Z | D)). Since given Θ the K tasks are decoupled, we can conduct inference for each task separately. We further assume q(θ_k, s_k) = q(θ_k) q(s_k), which in general is a reasonable simplifying assumption and allows us to do the optimization iteratively. The details of the E-step are shown in Algorithm 1.
We would like to comment on several things in Algorithm 1. First, we assume the form of q(θ) to be multivariate Gaussian, which is a reasonable choice especially considering the fact that only the first and second moments are needed in the M-step. Second, the choice of the prior p(s) in step 3 is significant, since for each s we only have one associated "data point" θ. In particular, using the Laplace distribution leads to a more sparse solution for E[s]; this will be made more clear in Section 3.2. Finally, we take the parametric form of q(s) to be a product of Laplace distributions with unit variance but unknown mean, where the fixed variance is intended to remove the unidentifiability caused by the interaction between the scales of s and Φ. Although a full-covariance Gaussian for q(s) is another choice, again for unidentifiability reasons caused by rotations of s and Φ we would make it a diagonal Gaussian. As a result, we argue that the product of Laplaces is better than the product of Gaussians, since it has the same parametric form as the prior p(s).
3.1.1 Variational Method for Bayesian Logistic Regression
We present an efficient algorithm based on the variational method proposed in[7] to solve
step 2 in Algorithm 1, which is guaranteed to converge and known to be efficient for this
problem. Given a Gaussian prior N(m₀, V₀) over the parameter θ and a training set² D = {(x₁, y₁), …, (x_N, y_N)}, we want to obtain an approximation N(m, V) to the true posterior distribution p(θ|D). Taking one data point (x, y) as an example, the basic idea is to use an exponential function to approximate the non-exponential likelihood function p(y|x, θ) = (1 + exp(−y θᵀx))^{−1}, which in turn makes the Bayes formula tractable.

² Again we omit the task index k and use y ∈ {−1, 1} instead of y ∈ {0, 1} to simplify notation.
By using the inequality p(y|x, θ) ≥ g(ξ) exp{ (y xᵀθ − ξ)/2 − λ(ξ) ((xᵀθ)² − ξ²) } ≜ p(y|x, θ, ξ), where g(z) = 1/(1 + exp(−z)) is the logistic function and λ(ξ) = tanh(ξ/2)/(4ξ), we can maximize the lower bound of p(y|x) = ∫ p(θ) p(y|x, θ) dθ ≥ ∫ p(θ) p(y|x, θ, ξ) dθ. An EM algorithm can be formulated by treating ξ as the parameter and θ as the hidden variable:
• E-step: Q(ξ, ξᵗ) = E[ log{ p(θ) p(y|x, θ, ξ) } | x, y, ξᵗ ]
• M-step: ξᵗ⁺¹ = argmax_ξ Q(ξ, ξᵗ)
Due to the Gaussianity assumption, the E-step can be thought of as updating the sufficient statistics (mean and covariance) of q(θ). Finally, by using the Woodbury formula the EM iterations can be unraveled and we get the efficient one-shot E-step update without involving matrix inversion (due to space limitations we skip the derivation):

V_post = V − [ 2λ(ξ) / (1 + 2λ(ξ)c) ] (Vx)(Vx)ᵀ
m_post = m − [ 2λ(ξ) / (1 + 2λ(ξ)c) ] V x xᵀ m + (y/2) Vx − (y/2) [ 2λ(ξ) / (1 + 2λ(ξ)c) ] c Vx
where c = xᵀVx, and ξ is calculated first from the M-step, which reduces to finding the fixed point of the following one-dimensional problem and can be solved efficiently:

ξ² = c − [ 2λ(ξ) / (1 + 2λ(ξ)c) ] c² + ( xᵀm − [ 2λ(ξ) / (1 + 2λ(ξ)c) ] c xᵀm + (y/2) c − (y/2) [ 2λ(ξ) / (1 + 2λ(ξ)c) ] c² )²
And this process will be performed for each data point to get the final approximation q(θ).
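A sketch of this one-point update, with λ(ξ) = tanh(ξ/2)/(4ξ) and y ∈ {−1, +1}; the fixed-point iteration for ξ follows the reconstruction above, and the helper names are ours.

```python
import numpy as np

def lam(xi):
    # lambda(xi) = tanh(xi/2) / (4 xi); the xi -> 0 limit is 1/8
    return np.tanh(xi / 2.0) / (4.0 * xi) if xi != 0 else 0.125

def jj_update(m, V, x, y, n_iters=10):
    """Posterior update for one example under the variational bound of
    Jaakkola & Jordan [7]; q(theta) = N(m, V) is the current approximation.
    A sketch of the one-shot E-step with a fixed-point M-step for xi.
    """
    Vx = V @ x
    c = float(x @ Vx)
    xi = 1.0
    for _ in range(n_iters):                  # fixed point for xi
        a = 2.0 * lam(xi) / (1.0 + 2.0 * lam(xi) * c)
        mean_t = x @ m - a * c * (x @ m) + 0.5 * y * c - 0.5 * y * a * c * c
        var_t = c - a * c * c
        xi = np.sqrt(var_t + mean_t ** 2)     # xi^2 = E[(x^T theta)^2]
    a = 2.0 * lam(xi) / (1.0 + 2.0 * lam(xi) * c)
    V_post = V - a * np.outer(Vx, Vx)
    m_post = m - a * Vx * (x @ m) + 0.5 * y * Vx - 0.5 * y * a * c * Vx
    return m_post, V_post
```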
3.2 Point Estimation
Although the empirical Bayes method is efficient for medium-sized problems, both its computational cost and memory requirement grow as the number of data instances or features increases. In the text or image domains, for example, the number of features can easily exceed ten thousand, so we need faster methods. We can obtain the point estimation of {θ_k, s_k}_{k=1}^{K} by treating it as a limiting case of the previous algorithm. To be more specific, by letting q(θ) and q(s) converge to Dirac delta functions, step 2 in Algorithm 1 can be thought of as finding the MAP estimate of θ, and step 3 becomes the following lasso-like optimization problem (m_s denotes the point estimate of s):

m̂_s = argmin_{m_s} 2‖m_s‖₁ + m_sᵀ Φᵀ Ψ^{−1} Φ m_s − 2 m_sᵀ Φᵀ Ψ^{−1} E[θ]
which can be solved numerically. Furthermore, the solution of the above optimization is sparse in m_s. This is a particularly nice property, since we would only like to consider hidden sources whose association with the tasks is significantly supported by evidence.
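Any lasso solver applies here; the sketch below uses plain proximal gradient (ISTA) on the objective above, with the soft-threshold prox for the 2‖·‖₁ term. The step size rule and iteration count are illustrative choices.

```python
import numpy as np

def point_estimate_s(Phi, Psi_inv, E_theta, n_iters=500, step=None):
    """Solve  min_m  2||m||_1 + m^T Phi^T Psi^{-1} Phi m
                     - 2 m^T Phi^T Psi^{-1} E[theta]
    by proximal gradient (ISTA); a sketch, not the paper's own solver.
    """
    A = Phi.T @ Psi_inv @ Phi        # quadratic term
    b = Phi.T @ Psi_inv @ E_theta    # linear term
    if step is None:
        step = 1.0 / (2.0 * np.linalg.eigvalsh(A).max())
    m = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        grad = 2.0 * (A @ m - b)     # gradient of the smooth part
        z = m - step * grad
        # soft threshold: prox of step * 2||.||_1
        m = np.sign(z) * np.maximum(np.abs(z) - 2.0 * step, 0.0)
    return m
```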
4 Experimental Results
The LICA model will work most effectively if the tasks we want to learn are very related.
In our experiments we apply the LICA model to multi-label text classification problems,
which is the case for many existing text collections, including the most popular ones like Reuters-21578 and the new RCV1 corpus. Here each individual task is to classify a given document into a particular category, and it is assumed that the multi-label property implies
that some of the tasks are related through some latent sources (semantic topics).
For Reuters-21578 we choose nine categories out of the ninety, based on the fact that those categories were often found to be correlated in previous studies [8]. After some preprocessing³ we get 3,358 unique features/words, and the empirical Bayes method is used to solve this problem.

³ We do stemming and remove stopwords and rare words (words that occur fewer than three times).
[Figure 2: Multi-label text classification results. Left panel: Reuters-21578, Macro-F1 vs. training set size (50, 100, 200, 500). Right panel: RCV1, Micro-F1 vs. training set size (100, 200, 500, 1000). Each panel compares individually trained classifiers ("Individual") with LICA.]
On the other hand, if we include all 116 TOPIC categories in the RCV1
corpus we get a much larger vocabulary size: 47,236 unique features. Bayesian inference
is intractable for this high-dimensional case, since the memory requirement alone is O(F²) for storing the full covariance matrix V[θ]. As a result we take the point estimation approach, which reduces the memory requirement to O(F). For both data sets we use the standard
which reduces the memory requirement to O(F ). For both data sets we use the standard
training/test split, but for RCV1 since the test part of corpus is huge (around 800k documents) we only randomly sample 10k as our test set. Since the effectiveness of learning
multiple related tasks jointly should be best demonstrated when we have limited resources,
we evaluate our LICA by varying the size of training set. Each setting is repeated ten times
and the results are summarized in Figure 2.
In Figure 2 the result "individual" is obtained by using regularized logistic regression for each category individually. The number of tasks K is equal to 9 and 116 for the Reuters-21578 and the RCV1
the same as K in our experiments. We use F1 measure which is preferred to error rate
in text classification due to the very unbalanced positive/negative document ratio. For the
Reuters-21578 collection we report the Macro-F1 results because this corpus is easier and
thus Micro-F1 are almost the same for both methods. For the RCV1 collection we only
report Micro-F1 due to space limitation, and in fact we observed similar trend in Macro-F1
although values are much lower due to the large number of rare categories. Furthermore,
we achieved a sparse solution for the point estimation method. In particular, we obtained
less than 5 non-zero sources out of 116 for most of the tasks for the RCV1 collection.
5 Discussions on Related Work
By viewing multitask learning as predicting multivariate responses, Breiman and Friedman[3] proposed a method called "Curds and Whey" for regression problems. The intuition
is to apply shrinkage in a rotated basis instead of the original task basis so that information
can be borrowed among tasks.
By treating tasks as IID generated from some probability space, empirical process theory[2] has been applied to study the bounds and asymptotics of multiple task learning,
similar to the case of standard learning. On the other hand, from the general Bayesian perspective[2][6] we could treat the problem of learning multiple tasks as learning a Bayesian
prior over the task space. Despite the generality of above two principles, it is often necessary to assume some specific structure or parametric form of the task space since the
functional space is usually of higher or infinite dimension compared to the input space.
Our model is related to the recently proposed Semiparametric Latent Factor Model (SLFM)
for regression by Teh et. al.[10]. It uses Gaussian Processes (GP) to model regression
through a latent factor analysis. Besides the difference between FA and ICA, its advantage
is that GP is non-parametric and works on the instance space; the disadvantage of that
model is that training instances need to be shared for all tasks. Furthermore, it is not clear
how to explore different task structures in this instance-space viewpoint. As pointed out
earlier, the exploration of different source models is important in learning related tasks as
the prior often plays a more important role than it does in standard learning.
6 Conclusion and Future Work
In this paper we proposed a probabilistic framework for learning multiple related tasks,
which tries to identify the shared latent independent components that are responsible for
the relatedness among those tasks. We also presented the corresponding empirical Bayes
method as well as point estimation algorithms for learning the model. Using non-Gaussian
distributions for hidden sources makes it possible to identify independent components instead of just decorrelation, and in particular we enjoyed the sparsity by modeling hidden
sources with Laplace distribution. Having the sparsity property makes the model not only
parsimonious but also more robust since the dependence on latent, independent sources will
be shrunk toward zero unless significantly supported by evidence from the data. By learning those related tasks jointly, we are able to get a better estimation of the latent independent
sources and thus achieve a better generalization capability compared to conventional approaches where the learning of each task is done independently. Our experimental results
in multi-label text classification problems show evidence to support our claim.
Our approach assumes that the underlying structure in the task space is a linear subspace,
which can usually capture important information about independent sources. However, it is
possible to achieve better results if we can incorporate specific domain knowledge about the
relatedness of those tasks into the model and obtain a reliable estimation of the structure.
For future research, we would like to consider more flexible source models as well as
incorporate domain specific knowledge to specify and learn the underlying structure.
References
[1] Ando, R. and Zhang, T. A Framework for Learning Predicative Structures from Multiple Tasks
and Unlabeled Data. Technical Rerport RC23462, IBM T.J. Watson Research Center, 2004.
[2] Baxter, J. A Model for Inductive Bias Learning. J. of Artificial Intelligence Research, 2000.
[3] Breiman, L. and Friedman J. Predicting Multivariate Responses in Multiple Linear Regression. J.
Royal Stat. Society B, 59:3-37, 1997.
[4] Evgeniou, T., Micchelli, C. and Pontil, M. Learning Multiple Tasks with Kernel Methods. J. of
Machine Learning Research, 6:615-637, 2005.
[5] Ghosn, J. and Bengio, Y. Bias Learning, Knowledge Sharing. IEEE Transaction on Neural Networks, 14(4):748-765, 2003.
[6] Heskes, T. Empirical Bayes for Learning to Learn. In Proc. of the 17th ICML, 2000.
[7] Jaakkola, T. and Jordan, M. A Variational Approach to Bayesian Logistic Regression Models and
Their Extensions. In Proc. of the Sixth Int. Workshop on AI and Statistics, 1997.
[8] Koller, D. and Sahami, M. Hierarchically Classifying Documents using Very Few Words. In Proc.
of the 14th ICML, 1997.
[9] Roberts, S. and Everson, R. (editors). Independent Component Analysis: Principles and Practice,
Cambridge University Press, 2001.
[10] Teh, Y.-W., Seeger, M. and Jordan, M. Semiparametric Latent Factor Models. In Z. Ghahramani
and R. Cowell, editors, Workshop on Artificial Intelligence and Statistics 10, 2005.
[11] Yu, K., Tresp, V. and Schwaighofer, A. Learning Gaussian Processes from Multiple Tasks. In
Proc. of the 22nd ICML, 2005.
1,954 | 2,775 | Data-Driven Online to Batch Conversions
Ofer Dekel and Yoram Singer
School of Computer Science and Engineering
The Hebrew University, Jerusalem 91904, Israel
{oferd,singer}@cs.huji.ac.il
Abstract
Online learning algorithms are typically fast, memory efficient, and simple to implement. However, many common learning problems fit more
naturally in the batch learning setting. The power of online learning
algorithms can be exploited in batch settings by using online-to-batch
conversions techniques which build a new batch algorithm from an existing online algorithm. We first give a unified overview of three existing online-to-batch conversion techniques which do not use training data
in the conversion process. We then build upon these data-independent
conversions to derive and analyze data-driven conversions. Our conversions find hypotheses with a small risk by explicitly minimizing datadependent generalization bounds. We experimentally demonstrate the
usefulness of our approach and in particular show that the data-driven
conversions consistently outperform the data-independent conversions.
1 Introduction
Batch learning is probably the most common supervised machine-learning setting. In the
batch setting, instances are drawn from a domain X and are associated with target values
from a target set Y. The learning algorithm is given a training set of examples, where each
example is an instance-target pair, and attempts to identify an underlying rule that can be
used to predict the target values of new unseen examples. In other words, we would like
the algorithm to generalize from the training set to the entire domain of examples. The
target space Y can be either discrete, as in the case of classification, or continuous, as in
the case of regression. Concretely, the learning algorithm is confined to a predetermined
set of candidate hypotheses H, where each hypothesis h ∈ H is a mapping from X to
Y, and the algorithm must select a ?good? hypothesis from H. The quality of different
hypotheses in H is evaluated with respect to a loss function ℓ, where ℓ(y, y′) is interpreted as the penalty for predicting the target value y′ when the correct target is y. Therefore, ℓ(y, h(x)) indicates how well hypothesis h performs with respect to the example (x, y). When Y is a discrete set, we often use the 0-1 loss, defined by ℓ(y, y′) = 1_{y≠y′}. We also assume that there exists a probability distribution D over the product space X × Y, and
that the training set was sampled i.i.d. from this distribution. Moreover, the existence of D
enables us to reason about the average performance of an hypothesis over its entire domain.
Formally, the risk of an hypothesis h is defined to be,
Risk_D(h) = E_{(x,y)∼D}[ ℓ(y, h(x)) ].    (1)
The goal of a batch learning algorithm is to use the training set to find a hypothesis that
does well on average, or more formally, to find h ? H with a small risk.
In contrast to the batch learning setting, online learning takes place in a sequence of rounds.
On any given round t, the learning algorithm receives a single instance x_t ∈ X and predicts its target value using a hypothesis h_{t−1}, which was generated on the previous round. On the first round, the algorithm uses a default hypothesis h₀. Immediately after the prediction is made, the correct target value y_t is revealed and the algorithm suffers an instantaneous loss of ℓ(y_t, h_{t−1}(x_t)). Finally, the online algorithm may use the newly obtained example (x_t, y_t) to improve its prediction strategy, namely to replace h_{t−1} with a new hypothesis h_t. Alternatively, the algorithm may choose to stick with its current hypothesis and set h_t = h_{t−1}. An online algorithm is therefore defined by its default hypothesis h₀ and the
update rule it uses to define new hypotheses. The cumulative loss suffered on a sequence
of rounds is the sum of instantaneous losses suffered on each one of the rounds in the
sequence. In the online setting there is typically no need for any statistical assumptions
since there is no notion of generalization. The goal of the online algorithm is simply to
suffer a small cumulative loss on the sequence of examples it is given, and examples that
are not in this sequence are entirely irrelevant.
Throughout this paper, we assume that we have access to a good online learning algorithm
A for the task at hand. Moreover, A is computationally efficient and easy to implement.
However, the learning problem we face fits much more naturally within the batch learning
setting. We would like to develop a batch algorithm B that exhibits the desirable characteristics of A but also has good generalization properties. A simple and powerful way to
achieve this is to use an online-to-batch conversion technique. This is a general name for
any technique which uses A as a building block in the construction of B. Several different online-to-batch conversion techniques have been developed over the years. Littlestone
and Warmuth [11] introduced an explicit relation between compression and learnability,
which immediately lent itself to a conversion technique for classification algorithms. Gallant [7] presented the Pocket algorithm, a conversion of Rosenblatt?s online Perceptron to
the batch setting. Littlestone [10] presented the Cross-Validation conversion which was
further developed by Cesa-Bianchi, Conconi and Gentile [2]. All of these techniques begin
by presenting the training set (x1 , y1 ), . . . , (xm , ym ) to A in some arbitrary order. As A
performs the m online rounds, it generates a sequence of online hypotheses which it uses to
make predictions on each round. This sequence includes the default hypothesis h0 and the
m hypotheses h1 , . . . , hm generated by the update rule. The aforementioned techniques all
share a common property: they all choose h, the output of the batch algorithm B, to be one
of the online hypotheses h0 , . . . , hm .
In this paper, we focus on a second family of conversions, which evolved somewhat later
and is due to the work of Helmbold and Warmuth [8], Freund and Schapire [6] and CesaBianchi, Conconi and Gentile [2]. The conversion strategies in this family also begin by
using A to generate the sequence of online hypotheses. However, instead of relying on
a single hypothesis from the sequence, they set h to be some combination of the entire
sequence. Another characteristic shared by these three conversions is that the training data
does not play a part in determining how the online hypotheses are combined. That is, the
training data is not used in any way other than to generate the sequence h0 , . . . , hm . In
this sense, these conversion techniques are data-independent. In this paper, we build on the
foundations of these data-independent conversions, and define conversion techniques that
explicitly use the training data to derive the batch algorithm from the online algorithm. By
doing so, we effectively define the data-driven counterparts of the algorithms in [8, 6, 2].
This paper is organized as follows. In Sec. 2 we review the data-independent conversion
techniques from [8, 6, 2] and give a simple unified analysis for all three conversions. At the
same time, we present a general framework which serves as a building-block for our datadriven conversions. Then, in Sec. 3, we derive three special cases of the general framework
and demonstrate some useful properties of the data-driven conversions. Finally, in Sec. 4,
we compare the different conversion techniques on several benchmark datasets and show
that our data-driven approach outperforms the existing data-independent approach.
2 Voting, Averaging, and Sampling
The first conversion we discuss is the voting conversion [6], which applies to problems
where the target space Y is discrete (and relatively small), such as classification problems.
The conversion presents the training set (x1 , y1 ), . . . , (xm , ym ) to the online algorithm A,
which generates the sequence of online hypotheses h₀, …, h_m. The conversion then outputs the hypothesis h^V, which is defined as follows: given an input x ∈ X, each online hypothesis casts a vote of h_i(x) and then h^V outputs the target value that receives the highest number of votes. For simplicity, assume that ties are broken arbitrarily. The second
conversion is the averaging conversion [2] which applies to problems where Y is a convex
set. For example, this conversion is applicable to margin-based online classifiers or to regression problems where, in both cases, Y = ℝ. This conversion also begins by using A to generate h₀, …, h_m. Then the batch hypothesis h^A is defined to be h^A(x) = (1/(m+1)) Σ_{i=0}^{m} h_i(x). The
third and last conversion discussed here is the sampling conversion [8]. This conversion is
the most general and applicable to any learning problem, however this generality comes at
a price. The resulting hypothesis, h^S, is a stochastic function and not a deterministic one. In other words, if applied twice to the same instance, h^S may output different target values. Again, this conversion begins by applying A to the training set and obtaining the sequence of online hypotheses. Every time h^S is evaluated, it randomly selects one of h₀, …, h_m and uses it to make the prediction. Since h^S is a stochastic function, the definition of Risk_D(h^S) changes slightly: the expectation in Eq. (1) is taken also over the random function h^S.
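The three conversions are easy to state in code once the online hypotheses are available as callables; a sketch with illustrative names (ties in the vote are resolved by Counter's ordering, i.e. arbitrarily):

```python
import numpy as np
from collections import Counter

def voting_hypothesis(hyps):
    """h^V: majority vote over a discrete target set."""
    def h(x):
        return Counter(hi(x) for hi in hyps).most_common(1)[0][0]
    return h

def averaging_hypothesis(hyps):
    """h^A: average prediction, for a convex target set such as the reals."""
    def h(x):
        return sum(hi(x) for hi in hyps) / len(hyps)
    return h

def sampling_hypothesis(hyps, rng=None):
    """h^S: a stochastic hypothesis; each call picks one h_i uniformly."""
    rng = np.random.default_rng(rng)
    def h(x):
        return hyps[rng.integers(len(hyps))](x)
    return h
```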
Simple data-dependent bounds on the risk of h^V, h^A and h^S can be derived, and these
bounds are special cases of the more general analysis given below. We now describe a
simple generalization of these three conversion techniques. It is reasonable to assume that
some of the online hypotheses generated by A are better than others. For instance, the
default hypothesis h0 is determined without observing even a single training example. This
surfaces the question of whether it is possible to isolate the "best" online hypotheses and only
use them to define the batch hypothesis. Formally, let [m] denote the set {0, . . . , m} and
let I be some non-empty subset of [m]. Now define h^V_I(x) to be the hypothesis which performs voting as described above, with the single difference that only the members of {h_i : i ∈ I} participate in the vote. Similarly, define h^A_I(x) = (1/|I|) Σ_{i∈I} h_i(x), and let h^S_I be the stochastic function that randomly chooses a function from the set {h_i : i ∈ I}
every time it is evaluated, and predicts according to it. The data-independent conversions
presented in the beginning of this section are obtained by setting I = [m]. Our idea is to
use the training data to find a set I which induces the batch hypotheses h^V_I, h^A_I, and h^S_I with
the smallest risk.
Since there is an exponential number of potential subsets of [m], we need to restrict ourselves to a smaller set of candidate sets. Formally, let 𝓘 be a family of subsets of [m], and we restrict our search for I to the family 𝓘. Following in the footsteps of [2], we make the simplifying assumption that none of the sets in 𝓘 includes the largest index m. This is a technical assumption which can be relaxed at the price of a slightly less elegant analysis.

We use two intuitive concepts to guide our search for I. First, for any set J ⊆ [m − 1], define L(J) = (1/|J|) Σ_{j∈J} ℓ(y_{j+1}, h_j(x_{j+1})). L(J) is the empirical evaluation of the loss of the hypotheses indexed by J. We would like to find a set J for which L(J) is small
since we expect that good empirical loss of the online hypotheses indicates a low risk of
the batch hypothesis. Second, we would like |J| to be large so that the presence of a few
bad online hypotheses in J will not have a devastating effect on the performance of the
batch hypothesis. The trade-off between these two competing concepts can be formalized
as follows. Let C be a non-negative constant and define

Γ(J) = L(J) + C |J|^{−1/2}.    (2)

The function Γ decreases as the average empirical loss L(J) decreases, and also as |J| increases. It therefore captures the intuition described above. The function Γ serves as our yardstick when evaluating the candidates in 𝓘. Specifically, we set I = argmin_{J∈𝓘} Γ(J).
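A sketch of this selection rule: losses[j] below stands for the online loss ℓ(y_{j+1}, h_j(x_{j+1})) suffered by hypothesis h_j, and the candidate family is passed in as an iterable of index sets. The names are ours.

```python
import numpy as np

def gamma(J, losses, C):
    """Gamma(J) = L(J) + C |J|^{-1/2}  (Eq. 2)."""
    J = list(J)
    return np.mean([losses[j] for j in J]) + C / np.sqrt(len(J))

def select_subset(candidates, losses, C):
    """Return I = argmin over the candidate family of Gamma(J)."""
    return min(candidates, key=lambda J: gamma(J, losses, C))
```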
Below we formally justify our choice of Γ, and specifically show that Γ(J) is a rather tight upper bound on the risk of h^A_J, h^V_J and h^S_J. The first lemma relates the risk of these functions to the average risk of the hypotheses indexed by J.
Lemma 1. Let (x₁, y₁), …, (x_m, y_m) be a sequence of examples which is presented to the online algorithm A and let h₀, …, h_m be the resulting sequence of online hypotheses. Let J be a non-empty subset of [m − 1] and let ℓ : Y × Y → ℝ₊ be a loss function. (1) If ℓ is the 0-1 loss then Risk_D(h^V_J) ≤ (2/|J|) Σ_{i∈J} Risk_D(h_i). (2) If ℓ is convex in its second argument then Risk_D(h^A_J) ≤ (1/|J|) Σ_{i∈J} Risk_D(h_i). (3) For any loss function ℓ it holds that Risk_D(h^S_J) = (1/|J|) Σ_{i∈J} Risk_D(h_i).
Proof. Beginning with the voting conversion, recall that the loss function being used is the 0-1 loss, namely there is a single correct prediction which incurs a loss of 0 and every other prediction incurs a loss of 1. For any example (x, y), if more than half of the hypotheses in {h_i}_{i∈J} predict the correct outcome then clearly h^V_J also predicts this outcome and ℓ(y, h^V_J(x)) = 0. Therefore, if ℓ(y, h^V_J(x)) = 1 then at least half of the hypotheses in {h_i}_{i∈J} make incorrect predictions and (|J|/2) ≤ Σ_{i∈J} ℓ(y, h_i(x)). We therefore get

ℓ(y, h^V_J(x)) ≤ (2/|J|) Σ_{i∈J} ℓ(y, h_i(x)).
The above holds for any example (x, y) and therefore also holds after taking expectations
on both sides of the inequality. The bound now follows from the linearity of expectation
and the definition of the risk function in Eq. (1).
Moving on to the second claim of the lemma, we assume that ℓ is convex in its second argument. The claim now follows from a direct application of Jensen's inequality.

Finally, h^S_J chooses its outcome by randomly choosing a hypothesis in {h_i : i ∈ J}, where the probability of choosing each hypothesis in this set equals 1/|J|. Therefore, the expected loss suffered by h^S_J on an example (x, y) is (1/|J|) Σ_{i∈J} ℓ(y, h_i(x)). The risk of h^S_J is simply the expected value of this term with respect to the random selection of (x, y). Again using the linearity of expectation, we obtain the third claim of the lemma.
The next lemma relates the average risk of the hypotheses indexed by J with the empirical
performance of these hypotheses, L(J). In the following lemma, we use capital letters to
emphasize that we are dealing with random variables.
Lemma 2. Let (X₁, Y₁), …, (X_m, Y_m) be a sequence of examples independently sampled according to D. Let H₀, …, H_m be the sequence of online hypotheses generated by A while observing this sequence of examples. Assume that the loss function ℓ is upper-bounded by R. Then for any J ⊆ [m − 1],

Pr[ (1/|J|) Σ_{i∈J} Risk_D(H_i) > Γ(J) ] < exp( −C² / (2R²) ),

where C is the constant used in the definition of Γ (Eq. (2)).
The proof of this lemma is a direct application of Azuma's bound on the concentration of Lipschitz martingales [1], and is identical to that of Proposition 1 in [2]. For concreteness, we now focus on the averaging conversion and note that the analyses of the other two conversion strategies are virtually identical. By combining the second claim of Lemma 1 with Lemma 2, we get that for any J ∈ 𝓘 it holds that Risk_D(h^A_J) ≤ Γ(J) with probability at least 1 − exp(−C²/(2R²)). Using the union bound, Risk_D(h^A_J) ≤ Γ(J) holds for all J ∈ 𝓘 simultaneously with probability at least

1 − |𝓘| exp( −C² / (2R²) ).
The greater the value of C, the more θ is influenced by the term |J|. On the other hand,
a large value of C increases the probability that θ indeed upper bounds Risk_D(h^A_J) for all
J ∈ I. In conclusion, we have theoretically justified our choice of θ in Eq. (2).
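Eq. (2) itself is not reproduced in this excerpt; the sketch below assumes the penalized form
θ(J) = L(J) + C/√|J| suggested by the surrounding discussion (a larger |J| shrinks the
confidence term), so treat the exact formula as an assumption:

import math

def theta(losses, J, C):
    # losses[i] is the online loss suffered by hypothesis h_i; L(J) is the average over J
    J = list(J)
    L_J = sum(losses[i] for i in J) / len(J)
    return L_J + C / math.sqrt(len(J))   # confidence term decreases as |J| grows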
3 Concrete Data-Driven Conversions
In this section we build on the ideas of the previous section and derive three concrete data-driven conversion techniques.
Suffix Conversion: An intuitive argument against selecting I = [m], as done by the data-independent conversions, is that many online algorithms tend to generate bad hypotheses
during the first few rounds of learning. As previously noted, the default hypothesis h_0 is
determined without observing any training data, and we should expect the first few online
hypotheses to be inferior to those that are generated further along. This argument motivates
us to consider subsets J of the form {a, a+1, ..., m−1}, where a is a positive integer
less than or equal to m−1. Li [9] proposed this idea in the context of the voting conversion
and gave a heuristic criterion for choosing a. Our formal setting gives a different criterion
for choosing a. In this conversion we define I to be the set of all suffixes of [m−1]. After
the algorithm generates h_0, ..., h_m, we set I = arg min_{J∈I} θ(J).
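A minimal sketch of this selection, reusing the assumed θ from above (variable names ours);
all suffixes are scanned in a single backward pass:

import math

def best_suffix(losses, C):
    # losses[i] is the online loss of h_i; suffixes are {a, ..., m-2} in 0-based indexing
    m = len(losses) + 1
    best_a, best_score, tail = None, float("inf"), 0.0
    for a in range(m - 2, -1, -1):
        tail += losses[a]
        size = (m - 1) - a
        score = tail / size + C / math.sqrt(size)   # θ(J) for the suffix starting at a
        if score < best_score:
            best_a, best_score = a, score
    return best_a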
Interval Conversion: Kernel-based hypotheses are functions that take the form h(x) =
Σ_{j=1}^{n} α_j K(z_j, x), where K is a Mercer kernel, z_1, ..., z_n are instances, often referred
to as support patterns, and α_1, ..., α_n are real weights. A variety of different batch algorithms produce kernel-based hypotheses, including the Support Vector Machine [12]. An
important learning problem, which is currently addressed by only a handful of algorithms,
is to learn a kernel-based hypothesis h which is defined by at most B support patterns. The
parameter B is a predefined constant often referred to as the budget of support patterns.
Naturally, kernel-based hypotheses which are represented by a few support patterns are
memory efficient and faster to calculate. A similar problem arises in the online learning
setting where the goal is to construct online algorithms where each online hypothesis hi is
a kernel-based function defined by at most B vectors. Several online algorithms have been
proposed for this problem [4, 13, 5]. First note that the data-independent conversions, with
I = [m], are inadequate for this setting. Although each individual online hypothesis is
defined by at most B vectors, h^A is defined by the union of these sets, which can be much
larger than B.
To convert a budget-constrained online algorithm A into a budget-constrained batch algorithm, we make an additional assumption on the update strategy employed by A. We
assume that whenever A updates its online hypothesis, it adds a single new support pattern
into the set used to represent the kernel hypothesis, and possibly removes some other pattern from this set. The algorithms in [4, 13, 5] all fall into this category. Therefore, if we
choose I to be the set {a, a+1, ..., b} for some integers 0 ≤ a < b < m, and A updates
its hypothesis k times during rounds a+1 through b, then h^A_I is defined by at most B + k
support patterns. Concretely, define I to be the set of all non-empty intervals in [m−1].
With C set properly, θ(J) bounds Risk_D(h^A_J) for every J ∈ I with high probability. Next,
[Figure 1 appears here: a binary tree over the hypotheses h_0, h_1, ..., h_12, with leaf pairs
grouped into the sets J_{0,1}, J_{2,3}, J_{4,5}, J_{6,7}, J_{8,9}, J_{10,11}, which are merged
into J_{0,3}, J_{4,7}, J_{8,11} and then into J_{0,7}.]
Figure 1: An illustration of the tree-based conversion.
generate h_0, ..., h_m by running A with a budget parameter of B/2. Finally, choose I to
be the set in I which contains at most B/2 updates and also minimizes the θ function. By
construction, the resulting hypothesis, h^A_I, is defined using at most B support patterns.
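The interval search can be done by brute force in O(m²) time; this sketch again assumes the
penalized θ (update_flags and max_updates are our names):

import math

def best_interval(losses, update_flags, C, max_updates):
    # update_flags[i] is True iff A updated its hypothesis on round i+1
    m = len(losses) + 1
    best, best_score = None, float("inf")
    for a in range(m - 1):
        loss_sum, updates = 0.0, 0
        for b in range(a, m - 1):
            loss_sum += losses[b]
            updates += update_flags[b]
            if updates > max_updates:        # budget constraint on support patterns
                break
            size = b - a + 1
            score = loss_sum / size + C / math.sqrt(size)
            if score < best_score:
                best, best_score = (a, b), score
    return best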
Tree-Based Conversion: A drawback of the suffix conversion is that it must be performed in two consecutive stages. First h_0, ..., h_m are generated and stored in memory.
Only then can we calculate θ(J) for every J ∈ I and perform the conversion. Therefore,
the memory requirements of this conversion grow linearly with m. We now present a
conversion that can sidestep this problem by interleaving the conversion with the online
hypothesis generation. This conversion slightly deviates from the general framework described in the previous section: instead of predefining a set of candidates I, we construct
the optimal subset I in a recursive manner. As a consequence, the analysis in the previous
section does not directly provide a generalization bound for this conversion. Assume for a
moment that m is a power of 2. For all 0 ≤ a ≤ m−1 define J_{a,a} = {a}. Now, assume
that we have already constructed the sets J_{a,b} and J_{c,d}, where a, b, c, d are integers such
that a < d, b = (a + d − 1)/2, and c = b + 1. Given these sets, define J_{a,d} as follows:

    J_{a,d} = J_{a,b}            if θ(J_{a,b}) ≤ θ(J_{c,d}) and θ(J_{a,b}) ≤ θ(J_{a,b} ∪ J_{c,d})
    J_{a,d} = J_{c,d}            if θ(J_{c,d}) ≤ θ(J_{a,b}) and θ(J_{c,d}) ≤ θ(J_{a,b} ∪ J_{c,d})    (3)
    J_{a,d} = J_{a,b} ∪ J_{c,d}  otherwise
Finally, define I = J_{0,m−1} and output the batch hypothesis h^A_I. An illustration of this
process is given in Fig. 1. Note that the definition of I requires only m−1 recursive
evaluations of Eq. (3). When m is not a power of 2, we can pad the sequence of online
hypotheses with virtual hypotheses, each of which attains an infinite loss. This conversion
can be performed in parallel with the online rounds since on round t we already have all of
the information required to calculate J_{a,b} for all b < t.
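The recursion of Eq. (3) is straightforward once each candidate set is summarized by its total
loss and size. This sketch uses that representation and the assumed penalized θ (both our
choices), returning the summary of the selected set:

import math

def theta_of(loss_sum, size, C):
    return loss_sum / size + C / math.sqrt(size)

def merge(left, right, C):
    # left/right are (loss_sum, size) summaries of J_{a,b} and J_{c,d}; implements Eq. (3)
    union = (left[0] + right[0], left[1] + right[1])
    t_l, t_r, t_u = (theta_of(*n, C) for n in (left, right, union))
    if t_l <= t_r and t_l <= t_u:
        return left
    if t_r <= t_l and t_r <= t_u:
        return right
    return union

def tree_convert(losses, C):
    nodes = [(l, 1) for l in losses]                  # the singletons J_{a,a}
    while len(nodes) > 1:
        if len(nodes) % 2:                            # pad with a virtual hypothesis
            nodes.append((float("inf"), 1))
        nodes = [merge(nodes[i], nodes[i + 1], C) for i in range(0, len(nodes), 2)]
    return nodes[0]                                   # summary of I = J_{0,m-1}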
In the special case where the instances are vectors in ℝⁿ, h_0, ..., h_m are linear hypotheses
and we use the averaging technique, the implementation of the tree-based conversion becomes memory efficient. Specifically, assume that each h_i takes the form h_i(x) = w_i · x,
where w_i is a vector of weights in ℝⁿ. In this case, storing an online hypothesis h_i is
equivalent to storing its weight vector w_i. For any J ⊆ [m−1], storing Σ_{j∈J} h_j requires
storing only the single n-dimensional vector Σ_{j∈J} w_j. Hence, once we calculate J_{a,b} we can
discard the original online hypotheses h_a, ..., h_b and instead merely keep h^A_{J_{a,b}}. Moreover,
in order to calculate θ we do not need to keep the set J_{a,b} itself but rather the values L(J_{a,b})
and |J_{a,b}|. Overall, storing h^A_{J_{a,b}}, L(J_{a,b}), and |J_{a,b}| requires only a constant amount of
memory. It can be verified using an inductive argument that the overall memory utilization
of this conversion is O(log(m)), which is significantly less than the O(m) space required
by the suffix conversion.
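In the linear case, a tree node then carries only a weight-vector sum, a loss sum, and a size;
a sketch of such a node (our class, complementing merge above):

import numpy as np

class Node:
    # summarizes J_{a,b}: enough to evaluate h^A_J and θ(J); nothing else is kept
    def __init__(self, w, loss):
        self.w_sum = np.array(w, dtype=float)   # Σ_{j∈J} w_j
        self.loss_sum = float(loss)             # |J| · L(J)
        self.size = 1                           # |J|

    def predict(self, x):
        return (self.w_sum / self.size) @ x     # h^A_J(x) = (1/|J|) Σ_{j∈J} w_j · x

    def merged_with(self, other):
        out = Node(self.w_sum + other.w_sum, self.loss_sum + other.loss_sum)
        out.size = self.size + other.size
        return out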
4 Experiments
We now turn to an empirical evaluation of the averaging and voting conversions. We
chose multiclass classification as the underlying task and used the multiclass version of
[Figure 2 appears here: bar plots for the datasets MNIST, LETTER, USPS and ISOLET (one row
per dataset) and for 3-fold through 10-fold splits (one column per split); each panel contains
three bars labeled S, I and T.]
Figure 2: Comparison of the three data-driven averaging conversions with the data-independent
averaging conversion, for different datasets (Y-axis) and different training-set sizes (X-axis).
Each bar shows the difference between the error percentages of a data-driven conversion
(suffix (S), interval (I) or tree-based (T)) and of the data-independent conversion. Error bars
show standard deviation over the k folds.
the Passive-Aggressive (PA) algorithm [3] as the online algorithm. The PA algorithm is a
kernel-based large-margin online classifier. To apply the voting conversion, Y should be a
finite set. Indeed, in multiclass categorization problems the set Y consists of all possible
labels. To apply the averaging conversion Y must be a convex set. To achieve this, we use
the fact that PA associates a margin value with each class, and define Y = ℝ^s (where s is
the number of classes).
In our experiments, we used the datasets LETTER, MNIST, USPS (training set only), and
ISOLET. These datasets are of size 20000, 70000, 7291 and 7797 respectively. MNIST and
USPS both contain images of handwritten digits and thus induce 10-class problems. The
other datasets contain images (LETTER) and utterances (ISOLET) of the English alphabet.
We did not use the standard splits into training set and test set and instead performed cross-validation in all of our experiments. For various values of k, we split each dataset into k
parts, trained each algorithm using each of these parts and tested on the k−1 remaining
parts. Specifically, we ran this experiment for k = 3, . . . , 10. The reason for doing this
is that the experiment is most interesting when the training sets are small and the learning
task becomes difficult.
We applied the data-independent averaging and voting conversions, as well as the three
data-driven variants of these conversions (6 data-driven conversions in all). The interval
conversion was set to choose an interval containing 500 updates. The parameter C was
arbitrarily set to 3. Additionally, we evaluated the test error of the last hypothesis generated by the online algorithm, hm . It is common malpractice amongst practitioners to use
hm as if it were a batch hypothesis, instead of using an online-to-batch conversion. As a
byproduct of our experiments, we show that hm performs significantly worse than any of
the conversion techniques discussed in this paper. The kernel used in all of the experiments
is the Gaussian kernel with default kernel parameters. We would like to emphasize that our
goal was not to achieve state-of-the-art results on these datasets but rather to compare the
different conversion strategies on the same sequence of hypotheses. To achieve the best
results, one would have to tune C and the various kernel parameters.
The results for the different variants of the averaging conversion are depicted in Fig. 2.
                  last          average      average-sfx   voting       voting-sfx
LETTER 5-fold     29.9 ± 1.8    21.2 ± 0.5   20.5 ± 0.6    23.4 ± 0.8   21.5 ± 0.8
LETTER 10-fold    37.3 ± 2.1    26.9 ± 0.7   26.5 ± 0.6    30.2 ± 1.0   27.9 ± 0.6
MNIST 5-fold       7.2 ± 0.5     5.9 ± 0.4    5.3 ± 0.6     7.0 ± 0.5    6.5 ± 0.5
MNIST 10-fold     13.8 ± 2.3     9.5 ± 0.8    9.1 ± 0.8     8.7 ± 0.5    8.0 ± 0.5
USPS 5-fold        9.7 ± 1.0     7.5 ± 0.4    7.1 ± 0.4     9.4 ± 0.4    8.8 ± 0.3
USPS 10-fold      12.7 ± 4.7    10.1 ± 0.7    9.5 ± 0.8    12.5 ± 1.0   11.3 ± 0.6
ISOLET 5-fold     20.1 ± 3.8    17.6 ± 4.1   16.7 ± 3.3    20.6 ± 3.4   18.3 ± 3.9
ISOLET 10-fold    28.6 ± 3.6    25.8 ± 2.8   22.7 ± 3.3    29.3 ± 3.1   26.7 ± 4.0

Table 1: Percent of errors averaged over the k folds with standard deviation. Results are
given for the last online hypothesis (h_m), the data-independent averaging and voting conversions, and their suffix variants. The lowest error on each row is shown in bold.
For each dataset and each training-set size, we present a bar-plot which represents by how
much each of the data-driven averaging conversions improves over the data-independent
averaging conversion. For instance, the left bar in each plot shows the difference between
the test errors of the suffix conversion and the data-independent conversion. A negative
value means that the data-driven technique outperforms the data-independent one. The
results clearly indicate that the suffix and tree-based conversions consistently improve over
the data-independent conversion. The interval conversion does not improve as much and
occasionally even loses to the data-independent conversion. However, this is a small price
to pay in situations where it is important to generate a compact kernel-based hypothesis.
Due to the lack of space, we omit a similar figure for the voting conversion and merely note
that the plots are very similar to the ones in Fig. 2.
In Table 1 we give some concrete values of test error, and compare data-independent and
data-driven versions of averaging and voting, using the suffix conversion. As a reference,
we also give the results obtained by the last hypothesis generated by the online algorithm.
In all of the experiments, the data-driven conversion outperforms the data-independent conversion. In general, averaging exhibits better results than voting, while the last online hypothesis is almost always inferior to all of the online-to-batch conversions.
References
[1] K. Azuma. Weighted sums of certain dependent random variables. Tohoku Mathematical Journal, 68:357–367, 1967.
[2] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning
algorithms. IEEE Transactions on Information Theory, 2004.
[3] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive
algorithms. Journal of Machine Learning Research, 2006.
[4] K. Crammer, J. Kandola, and Y. Singer. Online classification on a budget. NIPS 16, 2003.
[5] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The Forgetron: A kernel-based perceptron on a
fixed budget. NIPS 18, 2005.
[6] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm.
Machine Learning, 37(3):277–296, 1999.
[7] S. I. Gallant. Optimal linear discriminants. ICPR 8, pages 849–852. IEEE, 1986.
[8] D. P. Helmbold and M. K. Warmuth. On weak learning. Journal of Computer and System
Sciences, 50:551–573, 1995.
[9] Y. Li. Selective voting for perceptron-like on-line learning. In ICML 17, 2000.
[10] N. Littlestone. From on-line to batch learning. COLT 2, pages 269–284, July 1989.
[11] N. Littlestone and M. Warmuth. Relating data compression and learnability. Unpublished
manuscript, November 1986.
[12] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[13] J. Weston, A. Bordes, and L. Bottou. Online (and offline) on a tighter budget. AISTATS 10, 2005.
with Gibbs Distributions
Frank Wood, Stefan Roth, and Michael J. Black
Department of Computer Science
Brown University
Providence, RI 02912
{fwood,roth,black}@cs.brown.edu
Abstract
Probabilistic modeling of correlated neural population firing activity is
central to understanding the neural code and building practical decoding
algorithms. No parametric models currently exist for modeling multivariate correlated neural data and the high dimensional nature of the data
makes fully non-parametric methods impractical. To address these problems we propose an energy-based model in which the joint probability of
neural activity is represented using learned functions of the 1D marginal
histograms of the data. The parameters of the model are learned using contrastive divergence and an optimization procedure for finding appropriate marginal directions. We evaluate the method using real data
recorded from a population of motor cortical neurons. In particular, we
model the joint probability of population spiking times and 2D hand position and show that the likelihood of test data under our model is significantly higher than under other models. These results suggest that our
model captures correlations in the firing activity. Our rich probabilistic
model of neural population activity is a step towards both measurement
of the importance of correlations in neural coding and improved decoding of population activity.
1 Introduction
Modeling population activity is central to many problems in the analysis of neural data.
Traditional methods of analysis have used single cells and simple stimuli to make the problems tractable. Current multi-electrode technology, however, allows the activity of tens or
hundreds of cells to be recorded simultaneously along with with complex natural stimuli
or behavior. Probabilistic modeling of this data is challenging due to its high-dimensional
nature and the correlated firing activity of neural populations. One can view the problem as
one of learning the joint probability P (s, r) of a stimulus or behavior s and the firing activity of a neural population r. The neural activity may be in the form of firing rates or spike
times. Here we focus the latter more challenging problem of representing a multivariate
probability distribution over spike times.
Modeling P(s, r) is made challenging by the high dimensional, correlated, and non-Gaussian
nature of the data. The dimensionality means that we are unlikely to have sufficient training data for a fully non-parametric model. On the other hand no parametric
models currently exist that capture the one-sided, skewed nature of typical correlated neural data. We do, however, have sufficient data to model the marginal statistics of the data.
With that observation we draw on the FRAME model developed by Zhu and Mumford for
image texture synthesis [1] to represent neural population activity.
The FRAME model represents P (s, r) in terms of its marginal histograms. In particular
we seek the maximum entropy distribution that matches the observed marginals of P (s, r).
The joint is represented by a Gibbs model that combines functions of these marginals and
we exploit the method of [2] to automatically choose the optimal marginal directions. To
learn the parameters of the model we exploit the technique of contrastive divergence [3, 4]
which has been used previously to learn the parameters of Product-of-Experts (PoE) models
[5]. We observe that the FRAME model can be viewed as a Product of Experts where the
experts are functions of the marginal histograms. The resulting model is more flexible than
the standard PoE formulation and allows us to model more complex, skewed distributions
observed in neural data.
We train and test the model on real data recorded from a monkey performing a motor control task; details of the task and the neural data are described in the following section.
We learn a variety of probabilistic models including full Gaussian, independent Gaussian,
product of t-distributions [4], independent non-parametric, and the FRAME model. We
evaluate the log likelihood of test data under the different models and show that the complete FRAME model outperforms the other methods (note that ?complete? here means the
model uses the same number of marginal directions as there are dimensions in the data).
The use of energy-based models such as FRAME for modeling neural data appears novel
and promising, and the results reported here are easily extended to other cortical areas.
There is a need in the community for such probabilistic models of multi-variate spiking
processes. For example Bell and Parra [6] formulate a simple model of correlated spiking
but acknowledge that what they would really like, and do not have, is what they call a
'maximum spikelihood' model. This neural modeling problem represents a new application of energy-based models and consequently suggests extensions of the basic methods.
Finally, there is a need for rich probabilistic models of this type in the Bayesian decoding
of neural activity [7].
2 Methods
The data used in this study consists of simultaneously recorded spike times from a population of M1 motor neurons recorded in monkeys trained to perform a manual tracking
task [8, 9]. The monkey viewed a computer monitor displaying a target and a feedback
cursor. The task involved moving a 2D manipulandum so that a cursor controlled by the
manipulandum came into contact with a target. The monkey was rewarded when the target
was acquired, a new target appeared and the process repeated. Several papers [9, 11, 10]
have reported successfully decoding the cursor kinematics from this data using firing rates
estimated from binned spike counts.
The activity of a population of cells was recorded at a rate of 30kHz then sorted using an
automated spike sorting method; from this we randomly selected five cells with which to
demonstrate our method.
As shown in Fig. 1, r_{i,k} = [t^{(1)}_{i,k}, t^{(2)}_{i,k}, ..., t^{(J)}_{i,k}] is a vector of time
intervals t^{(j)}_{i,k} that represents the spiking activity of a single cell i at timestep k. These
intervals are the elapsed time between the time at timestep k and the time at each of j past
spikes. Let R_k = [r_{1,k}, r_{2,k}, ..., r_{N,k}] be a vector concatenation of N such spiking
activity representations. Let s_k = [x_k, y_k] be the position of the manipulandum at each
timestep.
[Figure 1 appears here: spike trains of four cells shown as vertical bars over time, with the
intervals t^{(1)}_{i,k}, t^{(2)}_{i,k}, t^{(3)}_{i,k} marked for one cell and the hand position
s_k = [x_k, y_k] sampled below.]
Figure 1: Representation of the data. Hand position at time k, s_k = [x_k, y_k], is regularly
sampled every 50 ms. Spiking activity (shown as vertical bars) is retained at full data
acquisition precision (30 kHz). Sections of spike trains from four cells are shown. The
response of a single cell, i, is represented by the time intervals to the three preceding
spikes; that is, r_{i,k} = [t^{(1)}_{i,k}, t^{(2)}_{i,k}, t^{(3)}_{i,k}].
Our training data consists of 4000 points (R_k, s_k) sampled at 50 ms intervals with a history
of 3 past spikes (J = 3) per neuron. Our test data is 1000 points of the same.
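A sketch of how such interval features could be computed from raw spike times (variable names
ours; the paper does not say how samples with fewer than J prior spikes are handled, so here
they are left as NaN):

import numpy as np

def interval_features(spike_times, sample_times, J=3):
    # r_{i,k}: elapsed time from sample time k back to each of the J most recent spikes
    spike_times = np.asarray(spike_times)      # sorted spike times of one cell (seconds)
    R = np.full((len(sample_times), J), np.nan)
    for k, t in enumerate(sample_times):
        idx = np.searchsorted(spike_times, t, side="right")
        past = spike_times[max(0, idx - J):idx][::-1]   # most recent spike first
        R[k, :len(past)] = t - past                     # t^{(1)} <= t^{(2)} <= t^{(3)}
    return R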
Various empirical marginals of the data (shown in Fig. 2) illustrate that the data are not
well fit by canonical symmetric parametric distributions because the data are asymmetric
and skewed. For such data traditional parametric models may not work well, so instead we
apply the FRAME model of [1] to this modeling problem. FRAME is a semi-parametric
energy based model of the following form:
Let d_k = [s_k, R_k], where s_k and R_k are defined as above. Let D = [d_1, ..., d_N] be a
matrix of N such points. We define
    P(d_k) = (1/Z(Λ)) exp( −Σ_e λ_e^T φ(θ_e^T d_k) )        (1)
where θ_e is a vector that projects the datum d_k onto a 1-D subspace, φ : ℝ → 𝕀^b is a
'histogramming' function that produces a vector with a single 1 in a single bin per datum
according to the projected value of that datum, λ_e ∈ ℝ^b is a weight vector, Z is a
normalization constant sometimes called the partition function (as it is a function of the
model parameters), b is the granularity of the histogram, and e indexes the 'experts'.
Taken together, λ_e^T φ(·) can be thought of as a discrete representation of a function. In this
view λ_e^T φ(θ_e^T d_k) is an energy function computed over a projection of the data. Models of
this form are constrained maximum entropy models, and in this case by adjusting λ_e the
model marginal projection onto θ_e is constrained to be identical (ideally) to the empirical
marginal over the same projection. Fig. 3 illustrates the model.
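For concreteness, the energy of a single datum under this model can be computed by binning
each projection and summing the selected weights; a minimal sketch (the array layout of
Theta, Lambda and edges is our convention):

import numpy as np

def frame_energy(d, Theta, Lambda, edges):
    # Theta: (E, dim) projection directions; Lambda: (E, b) weights;
    # edges: (E, b+1) histogram bin edges, one row per expert
    energy = 0.0
    for e in range(len(Theta)):
        proj = Theta[e] @ d                                    # θ_e^T d
        b = int(np.clip(np.searchsorted(edges[e], proj) - 1,
                        0, Lambda.shape[1] - 1))
        energy += Lambda[e, b]                                 # λ_e^T φ(θ_e^T d)
    return energy            # the unnormalized log-probability is −energy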
To relate this to current PoE models, if λ_e^T φ(·) were replaced with a log Student-t function
then this FRAME model would take the same form as the Product-of-Student-t formulation
of [12]. Distributions of this form are called Gibbs or energy-based distributions, as
Σ_e λ_e^T φ(θ_e^T d_k) is analogous to the energy in a Boltzmann distribution.
Figure 2: Histograms of various projections of single cell data. The top row shows histograms
of the values of t^{(1)}, t^{(2)}, t^{(3)}, x, and y, respectively. The bottom row shows random
projections of the same data. All these figures illustrate skew or one-sidedness, and motivate
our choice of a semi-parametric Gibbs model.
[Figure 3 appears here: left, a data point d projected onto a direction θ against the isosurfaces
of a distribution p(d); right, the projection θ^T d binned into the indicator vector φ(θ^T d) and
weighted by λ.]
Figure 3: (left) Illustration of the projection and weighting of a single point d: Here, the
data point d is projected onto the projection direction θ. The isosurfaces from a hypothetical distribution p(d) are shown in dotted gray. (right) Illustration of the projection and
binning of d: The upper plot shows the empirical marginal (in dotted gray) as obtained
from the projection illustrated in the left figure. The function φ(·) takes a real valued projection and produces a vector of fixed length with a single 1 in the bin that is mapped to
that range of the projection. This discretization of the projection is indicated by the spacing of the downward pointing arrows. The resulting vector is weighted by λ to produce an
energy. This process is repeated for each of the projection directions in the model. The constraints induced by multiple projections result in a distribution very close to the empirical
distribution.
Minimizing this energy is equivalent to maximizing the log likelihood.
Our model is parameterized by Λ = {(λ_e, θ_e) : 1 ≤ e ≤ E}, where E is the total number
of projections (or 'experts'). We use gradient ascent on the log likelihood to train the λ_e's.
As φ(·) is not differentiable, the θ_e's must be specified or learned in another way.
2.1 Learning the λ's
Standard gradient ascent becomes intractable for large numbers of cells because computing the partition function and its gradient becomes intractable. The gradient of the log
probability with respect to λ_{1..E} is

    ∇_Λ log P(d_k) = [ ∂ log P(d_k)/∂λ_1 , ... , ∂ log P(d_k)/∂λ_E ] .        (2)

Besides not being able to normalize the distribution, the right hand term of the partial
derivative

    ∂ log P(d_k)/∂λ_e = −φ(θ_e^T d_k) − ∂ log Z(Λ)/∂λ_e

typically has no closed-form solution and is very hard to compute.
Markov chain Monte Carlo (MCMC) techniques can be used to learn such models. Contrastive divergence [4] is an efficient learning algorithm for energy based models that approximates the gradient as

    ∂⟨log P(d_k)⟩/∂λ_e ≈ ⟨ ∂ log P(d_k)/∂λ_e ⟩_{P^0} − ⟨ ∂ log P(d_k)/∂λ_e ⟩_{P̂^m}        (3)

where P^0 is the training data and P̂^m are samples drawn according to the model. The
key is that the sampler is started at the training data and does not need to be run until
convergence, which typically would take much more time. The superscript indicates that
we use m regular Metropolis sampling steps [13] to draw samples from the model for
contrastive divergence training (m = 50 in our experiments).
The intuition behind this approximation is that samples drawn from the model should have
the same statistics as the training data. Maximizing the log probability of training data is
equivalent to minimizing the Kullback–Leibler (KL) divergence between the model and the
true distribution. Contrastive divergence attempts to minimize the difference in KL divergence between the model one step towards equilibrium and the training data. Intuitively
this means that the contrastive divergence opposes any tendency for the model to diverge
from the true distribution.
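A minimal contrastive-divergence update for the λ's, using the same binning scheme as the
energy sketch above (the learning rate, proposal width and Metropolis details are our choices;
the paper specifies only m = 50):

import numpy as np

def cd_update(Lambda, Theta, edges, data, lr=0.01, m=50, step=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    def bins(d):                                   # bin index of θ_e^T d for every expert
        p = Theta @ d
        return [int(np.clip(np.searchsorted(edges[e], p[e]) - 1,
                            0, Lambda.shape[1] - 1)) for e in range(len(Theta))]
    def energy(d):
        return sum(Lambda[e, b] for e, b in enumerate(bins(d)))
    grad = np.zeros_like(Lambda)
    for d in data:
        for e, b in enumerate(bins(d)):
            grad[e, b] -= 1.0                      # data term: −φ(θ_e^T d)
        s = np.array(d, dtype=float)
        for _ in range(m):                         # m Metropolis steps started at the data
            prop = s + step * rng.standard_normal(len(s))
            if rng.random() < np.exp(energy(s) - energy(prop)):
                s = prop
        for e, b in enumerate(bins(s)):
            grad[e, b] += 1.0                      # model term: +φ(θ_e^T s)
    Lambda += lr * grad / len(data)                # ascend the approximate log likelihood
    return Lambda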
2.2 Learning the θ's
Because φ(·) is not differentiable, we turn to the feature pursuit method of [2] to learn
the projection directions θ_{1..E}. This approach involves successively searching for a new
projection in a direction where a model with the new projection would differ maximally
from the model without. Their approach involves approximating the expected projection
using a Parzen window method with Gaussian kernels. Gradient search on a KL-divergence
objective function is used to find each subsequent projection. We refer readers to [2] for
details.
It was suggested by [2] that there are many local optima in this feature pursuit. Our experience tends to support this claim. In fact, it may be that feature pursuit is not entirely
necessary. Additionally, in our experience, the most important aspect of the feature selection algorithm is how many feature pursuit starting points are considered. It may be as
effective (and certainly more efficient) to simply guess a large number of projections and
estimate the marginal KL-divergence for them all, selecting the largest as the new projection.
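A cheap stand-in for that random-guessing variant (our heuristic reading: score each random
direction by the KL divergence of its empirical marginal from a Gaussian fit; this is not the
exact pursuit criterion of [2]):

import numpy as np

def pick_projection(data, n_candidates=500, n_bins=32, rng=None):
    rng = rng or np.random.default_rng(0)
    best_theta, best_kl = None, -np.inf
    for _ in range(n_candidates):
        theta = rng.standard_normal(data.shape[1])
        theta /= np.linalg.norm(theta)
        proj = data @ theta
        hist, edges = np.histogram(proj, bins=n_bins, density=True)
        widths = np.diff(edges)
        centers = 0.5 * (edges[:-1] + edges[1:])
        p = hist * widths                                    # empirical bin masses
        g = np.exp(-0.5 * ((centers - proj.mean()) / (proj.std() + 1e-12)) ** 2)
        q = g / g.sum()                                      # discretized Gaussian masses
        mask = p > 0
        kl = np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], 1e-12)))
        if kl > best_kl:
            best_theta, best_kl = theta, kl
    return best_theta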
2.3 Normalizing the distribution
Generally speaking, the partition function is intractable to compute as it involves integration
over the entire domain of the joint; however, in the case where E (the number of experts)
is the same as the dimensionality of d, the partition function is tractable. Each expert
can be normalized individually. The per-expert normalization is

    Z_e = Σ_b s_e^{(b)} exp( −λ_e^{(b)} )

where b indexes the elements of λ_e and s_e^{(b)} is the width of the b-th bin of the e-th
histogramming function. Using the change of variables rule

    Z = |det(Θ)| Π_e Z_e

where the square matrix Θ = [θ_1 θ_2 ... θ_E]. This is not possible when the number of
experts exceeds or is smaller than the dimensionality of the data.
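In code, the complete-model partition function is a couple of lines; this sketch simply mirrors
the two formulas above (widths[e, b] holding s_e^{(b)} is our array layout):

import numpy as np

def partition_function(Lambda, Theta, widths):
    Ze = (widths * np.exp(-Lambda)).sum(axis=1)       # Z_e = Σ_b s_e^(b) exp(−λ_e^(b))
    return np.abs(np.linalg.det(Theta)) * np.prod(Ze) # Z = |det(Θ)| Π_e Z_e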
    POT       IG        G         RF        I         FP
    -31849    -30893    -23573    -23108    -19155    -12509

Table 1: Log likelihoods of test data. The test data consists of the spiking activity of 5 cells
and x, y position behavioral variables as illustrated in Fig. 1. Log likelihoods are reported
for various models: POT: Product of Student-t, IG: diagonal covariance Gaussian, G: full
covariance Gaussian, RF: random filter FRAME, I: 5 independent FRAME models, one
per cell, and FP: feature pursuit FRAME
[Figure 4 appears here: four columns of 2-d sample histograms, labeled Empirical, FRAME,
Gaussian and PoT.]
Figure 4: This figure illustrates the modeling power of the semi-parametric Gibbs distribution over a number of symmetric, fully parametric distributions. Each row shows normalized 2-d histograms of samples projected onto a plane. The first column is the training data,
column two is the Gibbs distribution, column three is a Gaussian distribution, and column
four is a Product-of-Student-t distribution.
3 Results
We trained several models on several datasets. We show results for complete models of the
joint neuronal response of 5 real motor cortex cells plus x, y hand kinematics (3 past spikes
for each cell plus 2 behavior variables equals a 17 dimension dataset). A complete model
has the same number of experts as dimensions.
Table 1 shows the log likelihood of test data under several models: Product of Studentt, a diagonal covariance multidimensional Gaussian (independent), multivariate Gaussian,
a complete FRAME model with random projection directions, a product of 5 complete
FRAME single cell models with learned projections, and a complete FRAME model with
learned projection directions. Because these all are complete models, we are able to compute the partition function of each. Each model was trained on 4000 points and the log
likelihood was computed using 1000 distinct test points.
In Fig. 4 we show histograms of samples drawn from a full covariance Gaussian and
energy-based models with two times more projection directions than the data dimensionality. These figures illustrate the modeling power of our approach in that it represents the
irregularities common to real neural data better than Gaussian and other symmetric distributions.
Note that the model using random marginal directions does not model the data as well as
one using optimized directions; this is not surprising. It may well be the case, however, that
with many more random directions such a model would perform significantly better. This
overcomplete case however is unnormalized and hence cannot be directly compared here.
4 Discussion
In this work we demonstrated an approach for using Gibbs distributions to model the joint
spiking activity of a population of cells and an associated behavior. We developed a novel
application of contrastive divergence for learning a FRAME model which can be viewed
as a semi-parametric Product-of-Experts model. We showed that our model outperformed
other models in representing complex monkey motor cortical spiking data.
Previous methods for probabilistically modeling spiking process have focused on modeling
the firing rates of a population in terms of a conditional intensity function (firing rate conditioned on various correlates and previous spiking) [15, 16, 17, 18, 19]. These functions are
often formulated in terms of log-linear models and hence resemble our approach. Here we
take a more direct approach of modeling the joint probability using energy-based models
and exploit contrastive divergence for learning.
Information theoretic analysis of spiking populations calls for modeling high dimensional
joint and conditional distributions. In the work of [20, 21, 22], these distributions are used
to study encoding models, in particular the importance of correlation in the neural code.
Our models are directly applicable to this pursuit. Given an experimental design with a
relatively low dimension stimulus, where the entropy of that stimulus can be accurately
computed, our models are applicable without modification.
Our approach may also be applied to neural decoding. A straightforward extension of
our model could include hand positions (or other kinematic variables) at multiple time instants. Decoding algorithms that exploits these joint models by maximizing the likelihood
of the observed firing activity over an entire data set remain to be developed. Note that it
may be possible to produce more accurate models of the un-normalized joint probability
by increasing the number of marginal constraints. To exploit these overcomplete models,
algorithms that do not require normalized probabilities are required (particle filtering is a
good example).
Not surprisingly the FRAME model performed better on the non-symmetric neural data
than the related, but symmetric, Product-of-Student-t model. We have begun exploring
more flexible and asymmetric experts which would offer advantages over discrete histogramming inherent to the FRAME model.
Acknowledgments
Thanks to J. Donoghue, W. Truccolo, M. Fellows, and M. Serruya. This work was supported by NIH-NINDS R01 NS 50967-01 as part of the NSF/NIH Collaborative Research
in Computational Neuroscience Program.
References
[1] S. C. Zhu, Z. N. Wu, and D. Mumford, "Minimax entropy principle and its application
to texture modeling," Neural Comp., vol. 9, no. 8, pp. 1627–1660, 1997.
[2] C. Liu, S. C. Zhu, and H. Shum, "Learning inhomogeneous Gibbs model of faces by
minimax entropy," in ICCV, pp. 281–287, 2001.
[3] G. Hinton, "Training products of experts by minimizing contrastive divergence," Neural Comp., vol. 14, pp. 1771–1800, 2002.
[4] Y. Teh, M. Welling, S. Osindero, and G. E. Hinton, "Energy-based models for sparse
overcomplete representations," JMLR, vol. 4, pp. 1235–1260, 2003.
[5] G. Hinton, "Product of experts," in ICANN, vol. 1, pp. 1–6, 1999.
[6] A. J. Bell and L. C. Parra, "Maximising sensitivity in a spiking network," in Advances
in NIPS, vol. 17, pp. 121–128, 2005.
[7] R. S. Zemel, Q. J. M. Huys, R. Natarajan, and P. Dayan, "Probabilistic computation
in spiking populations," in Advances in NIPS, vol. 17, pp. 1609–1616, 2005.
[8] M. Serruya, N. Hatsopoulos, M. Fellows, L. Paninski, and J. Donoghue, "Robustness
of neuroprosthetic decoding algorithms," Biological Cybernetics, vol. 88, no. 3, pp.
201–209, 2003.
[9] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue,
"Brain-machine interface: Instant neural control of a movement signal," Nature, vol.
416, pp. 141–142, 2002.
[10] W. Wu, M. J. Black, Y. Gao, E. Bienenstock, M. Serruya, A. Shaikhouni, and J. P.
Donoghue, "Neural decoding of cursor motion using a Kalman filter," in Advances in
NIPS, vol. 15, pp. 133–140, 2003.
[11] Y. Gao, M. J. Black, E. Bienenstock, S. Shoham, and J. P. Donoghue, "Probabilistic
inference of arm motion from neural activity in motor cortex," Advances in NIPS,
vol. 14, pp. 221–228, 2002.
[12] M. Welling, G. Hinton, and S. Osindero, "Learning sparse topographic representations with products of Student-t distributions," in Advances in NIPS, vol. 15, pp.
1359–1366, 2003.
[13] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis, 2nd ed.
Chapman & Hall/CRC, 2004.
[14] S. Roth and M. J. Black, "Fields of experts: A framework for learning image priors,"
in CVPR, vol. 2, pp. 860–867, 2005.
[15] D. R. Brillinger, "The identification of point process systems," The Annals of Probability, vol. 3, pp. 909–929, 1975.
[16] E. S. Chornoboy, L. P. Schramm, and A. F. Karr, "Maximum likelihood identification
of neuronal point process systems," Biological Cybernetics, vol. 59, pp. 265–275,
1988.
[17] Y. Gao, M. J. Black, E. Bienenstock, W. Wu, and J. P. Donoghue, "A quantitative
comparison of linear and non-linear models of motor cortical activity for the encoding and decoding of arm motions," in First International IEEE/EMBS Conference on
Neural Engineering, pp. 189–192, 2003.
[18] M. Okatan, "Maximum likelihood identification of neuronal point process systems,"
Biological Cybernetics, vol. 59, pp. 265–275, 1988.
[19] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown, "A point
process framework for relating neural spiking activity to spiking history," J. Neurophysiology, vol. 93, pp. 1074–1089, 2005.
[20] P. E. Latham and S. Nirenberg, "Synergy, redundancy, and independence in population codes, revisited," J. Neuroscience, vol. 25, pp. 5195–5206, 2005.
[21] S. Nirenberg and P. E. Latham, "Decoding neuronal spike trains: How important are
correlations?" PNAS, vol. 100, pp. 7348–7353, 2003.
[22] S. Panzeri, H. D. R. Golledge, F. Zheng, M. Tovee, and M. P. Young, "Objective
assessment of the functional role of spike train correlations using information measures," Visual Cognition, vol. 8, pp. 531–547, 2001.
MEG Data with Large Background Activity
S.S. Nagarajan
Biomagnetic Imaging Laboratory
Department of Radiology
University of California, San Francisco
San Francisco, CA 94122
[email protected]
H.T. Attias
Golden Metallic, Inc.
P.O. Box 475608
San Francisco, CA 94147
[email protected]
K.E. Hild
Biomagnetic Imaging Laboratory
Department of Radiology
University of California, San Francisco
San Francisco, CA 94122
[email protected]
K. Sekihara
Dept. of Systems Design and Engineering
Tokyo Metropolitan University
Asahigaoka 6-6, Hino, Tokyo 191-0065
[email protected]
Abstract
This paper presents a novel technique for analyzing electromagnetic
imaging data obtained using the stimulus evoked experimental paradigm.
The technique is based on a probabilistic graphical model, which describes the data in terms of underlying evoked and interference sources,
and explicitly models the stimulus evoked paradigm. A variational
Bayesian EM algorithm infers the model from data, suppresses interference sources, and reconstructs the activity of separated individual brain
sources. The new algorithm outperforms existing techniques on two real
datasets, as well as on simulated data.
1 Introduction
Electromagnetic source imaging, the reconstruction of the spatiotemporal activation of
brain sources from MEG and EEG data, is currently being used in numerous studies of
human cognition, both in normal and in various clinical populations [1]. A major advantage of MEG/EEG over other noninvasive functional brain imaging techniques, such as
fMRI, is the ability to obtain valuable information about neural dynamics with high temporal resolution on the order of milliseconds. An experimental paradigm that is very popular
in imaging studies is the stimulus evoked paradigm. In this paradigm, a stimulus, e.g., a
tone at a particular frequency and duration, is presented to the subject at a series of equally
spaced time points. Each presentation (or trial) produces activity in a set of brain sources,
which generates an electromagnetic field captured by the sensor array. These data constitute the stimulus evoked response, and analyzing them can help to gain insights into the
mechanism used by the brain to process the stimulus and similar sensory inputs. This paper
presents a new technique for analyzing stimulus evoked electromagnetic imaging data.
An important problem in analyzing such data is that MEG/EEG signals, which are captured
by sensors located outside the brain, contain not only signals generated by brain sources
evoked by the stimulus, but also interference signals, generated by other sources such as
spontaneous brain activity, eye blinks and other biological and non-biological sources of
artifacts. Interference signals overlap spatially and temporally with the stimulus evoked
signals, making it difficult to obtain accurate reconstructions of evoked brain sources. A
related problem is that signals from different evoked sources themselves overlap with each
other, making it difficult to localize individual sources and reconstruct their separate responses.
Many approaches have been taken to the problem of suppressing interference signals.
One method is averaging over multiple trials, which reduces the contributions from interference sources, assuming that they are uncorrelated with the stimulus and that their autocorrelation time scale is shorter than the trial length. However, a successful application
of this method requires a large number of trials, effectively limiting the number of stimulus
conditions per experiment. It usually also requires manual rejection of trials containing
conspicuous artifacts. A set of methods termed subspace techniques computes a projection
of the sensor data onto the signal subspace, which corresponds to brain sources of interest.
However, these methods rely on thresholding to determine the noise level, and tend to
discard information below threshold. Consequently, those methods perform well only when
the interference level is low.

[Figure 1 appears here.]
Figure 1: Simulation example (see text)
Independent component analysis (ICA) techniques [4-8], introduced more recently, attempt
to decompose the sensor data into a set of signals that are mutually statistically independent. Artifacts such as eye blinks are independent of brain source activity and ICA has been
able in many cases to successfully separate the two types of signals into distinct groups of
output variables. However, ICA techniques have several shortcomings. First, they require
pre-processing the sensor data to reduce dimensionality, which causes loss of information
on brain sources with relatively low amplitude. This is because, for K sensors,
ICA must learn a square K × K unmixing matrix from N data points; typical values such
as K = 275, N = 700 can lead to poor performance due to local maxima, overfitting,
and slow convergence. Second, ICA assumes L + M = K′, where L, M are the number
of evoked and interference sources and K′ < K is the reduced input dimensionality.
However, many cases have L + M > K′, which leads to suboptimal and sometimes failed
separation. Third, ICA requires post-processing of its output signals, usually via manual
examination by experts (though sometimes by thresholding), to determine which signals
correspond to evoked brain sources of interest.
The fourth drawback of ICA techniques is that, by design, they cannot exploit the advantage
offered by the evoked stimulus paradigm. Whereas interference sources are continuously
active, evoked sources become active at each trial only near the time of stimulus presentation, termed stimulus onset time. Hence, knowledge of the onset times can help separate
the evoked sources. However, the onset times, which are determined by the experimental
design and available during data analysis, are ignored by ICA.
In this paper we present a novel technique for suppressing interference signals and separating signals from individual evoked sources. The technique is based on a new probabilistic
graphical model termed stimulus evoked independent factor analysis (SEIFA). This model,
an extension of [2], describes the observed sensor data in terms of two sets of independent
variables, termed factors, which are not directly observable. The factors in the first set
represent evoked sources, and the factors in the second set represent interference sources.
The sensor data are generated by linearly combining the factors in the two sets using two
mixing matrices, followed by adding sensor noise. The mixing matrices and the precision
matrix of the sensor noise constitute the SEIFA model parameters, and are inferred from
data using a variational Bayesian EM algorithm [3], which computes their posterior distribution. Separation of the evoked sources is achieved in the course of processing by the
algorithm.
The SEIFA model is free from the above four
shortcomings. It can be applied directly to the sensor data without dimensionality reduction, therefore no information is lost. Rather than learning a square K ? K unmixing matrix, it learns
a K ? (L + M ) mixing matrix, where the number of interference factors M is minimized using automatic Bayesian model selection which is
part of the algorithm. In addition, SEIFA is designed to explicitly model the stimulus evoked
paradigm, hence it optimally exploits the knowledge of stimulus onset times. Consequently,
evoked sources are automatically identified and no
post-processing is required.
2 SEIFA Probabilistic Graphical Model

This section presents the SEIFA probabilistic graphical model, which is the focus of this
paper. The SEIFA model describes observed MEG sensor data in terms of three types of
underlying, unobserved signals: (1) signals arising from stimulus evoked sources, (2) signals
arising from interference sources, and (3) sensor noise signals. The model is inferred from
data by an algorithm presented in the next section. Following inference, the model is used
to separate the evoked source signals from those of the interference sources and from sensor
noise, thus providing a clean version of the evoked response. The model further separates
the evoked response into statistically independent factors. In addition, it produces a
regularized correlation matrix of the clean evoked response and of each independent factor,
which facilitates localization.

[Figure 2 appears here.]
Figure 2: Performance on simulated data (see text)
Let y_{in} denote the signal recorded by sensor i = 1 : K at time n = 1 : N. We assume that these signals arise from L evoked factors and M interference factors that are combined linearly. Let x_{jn} denote the signal of evoked factor j = 1 : L, and let u_{jn} denote the signal of interference factor j = 1 : M, both at time n. We use the term factor rather than source for a reason explained below. Let A_{ij} denote the evoked mixing matrix, and let B_{ij} denote the interference mixing matrix. Those matrices contain the coefficients of the linear combination of the factors that produces the data. They are analogous to the factor loading matrix in the factor analysis model. Let v_{in} denote the noise signal on sensor i.
We use an evoked stimulus paradigm, where a stimulus is presented at a specific time, termed the stimulus onset time, and is absent beforehand. The stimulus onset time is defined as n = N_0 + 1. The period preceding the onset, n = 1 : N_0, is termed the pre-stimulus period, and the period following the onset, n = N_0 + 1 : N, is termed the post-stimulus period. We assume the evoked factors are active only post stimulus and satisfy x_{jn} = 0 before the onset. Hence

$$y_n = \begin{cases} B u_n + v_n, & n = 1 : N_0 \\ A x_n + B u_n + v_n, & n = N_0 + 1 : N \end{cases} \qquad (1)$$
To turn (1) into a probabilistic model, each signal must be modelled by a probability distribution. Here, each evoked factor is modelled by a mixture of Gaussian (MOG) distributions. For factor j we have a MOG model with S_j components, also termed states,

$$p(x_n) = \prod_{j=1}^{L} p(x_{jn}), \qquad p(x_{jn}) = \sum_{s_j=1}^{S_j} \mathcal{N}(x_{jn} \mid \mu_{j,s_j}, \nu_{j,s_j})\, \pi_{j,s_j} \qquad (2)$$
State s_j is a Gaussian with mean μ_{j,s_j} and precision ν_{j,s_j}, and its probability is π_{j,s_j}. We model the factors as mutually statistically independent.
There are three reasons for using MOG distributions, rather than Gaussians, to describe the
evoked factors. First, evoked brain sources are often characterized by spikes or by modulated harmonic functions, leading to non-Gaussian distributions. Second, previous work
on ICA has shown that independent Gaussian sources that are linearly mixed cannot be
separated. Since we aim to separate the evoked response into contributions from individual
factors, we must therefore use independent non-Gaussian factor distributions. Third, as is
well known, a MOG model with a suitably chosen number of states can describe arbitrary
distributions at the desired level of accuracy.
For interference signals and sensor noise we employ a Gaussian model. Each interference factor is modelled by an independent, zero-mean Gaussian distribution with unit precision,

$$p(u_n) = \prod_{j=1}^{M} \mathcal{N}(u_{jn} \mid 0, 1) = \mathcal{N}(u_n \mid 0, I) \qquad (3)$$
The Gaussian model implies that we exploit only second order statistics of the interference
signals. This contrasts with the evoked signals, whose MOG model facilitates exploiting
higher order statistics, leading to more accurate reconstruction and to separation.
The sensor noise is modelled by a zero-mean Gaussian distribution with a diagonal precision matrix Λ, p(v_n) = N(v_n | 0, Λ). From (1) we obtain p(y_n | x_n, u_n) = p(v_n), where we substitute v_n = y_n − A x_n − B u_n with x_n = 0 for n = 1 : N_0. Hence, we obtain the distribution of the sensor signals conditioned on the evoked and interference factors,

$$p(y_n \mid x_n, u_n, A, B) = \begin{cases} \mathcal{N}(y_n \mid B u_n, \Lambda), & n = 1 : N_0 \\ \mathcal{N}(y_n \mid A x_n + B u_n, \Lambda), & n = N_0 + 1 : N \end{cases} \qquad (4)$$

SEIFA also makes an i.i.d. assumption, meaning the signals at different time points are independent. Hence p(y, x, u | A, B) = ∏_n p(y_n | x_n, u_n, A, B) p(x_n) p(u_n), where y, x, u denote collectively the signals y_n, x_n, u_n at all time points. The i.i.d. assumption is made for simplicity, and implies that the algorithm presented below can exploit the spatial statistics of the data but not their temporal statistics.
To complete the definition of SEIFA, we must specify prior distributions over the model parameters. For the noise precision matrix Λ we choose a flat prior, p(Λ) = const. For the mixing matrices A, B we choose to use a conjugate prior

$$p(A) = \prod_{ij} \mathcal{N}(A_{ij} \mid 0, \lambda_i \alpha_j), \qquad p(B) = \prod_{ij} \mathcal{N}(B_{ij} \mid 0, \lambda_i \beta_j) \qquad (5)$$

where all matrix elements are independent zero-mean Gaussians and the precision of the ij-th matrix element is proportional to the noise precision λ_i on sensor i. It is the λ dependence which makes this prior conjugate. The proportionality constants α_j and β_j constitute the parameters of the prior, a.k.a. hyperparameters. Eqs. (2,3,4,5) fully define the SEIFA model.
3 Inferring the SEIFA Model from Data: A VB-EM Algorithm
This section presents an algorithm that infers the SEIFA model from data. SEIFA is a
probabilistic model with hidden variables, since the evoked and interference factors are
not directly observable, hence it must be treated in the EM framework. We use variational Bayesian EM (VB-EM), which has two relevant advantages over standard EM. First,
it is more robust to overfitting, which can be a significant problem when working with
high-dimensional but relatively short time series (here we analyze N < 1000 point long,
K = 275 dimensional data sequences). To achieve this robustness, VB-EM computes (using a variational approximation) a full posterior distribution over model parameters, rather
than a single MAP estimate. This means that VB-EM considers all possible parameters
values, and computes the probability of each value conditioned on the observed data. It
also performs automatic model order selection by optimizing the hyperparameters, and
consequently uses the minimum number of parameters needed to explain the data. Second,
VB-EM produces automatically regularized estimators for the evoked response correlation
matrices (required for source localization), where standard EM produces poorly conditioned ones. This is also a result of computing a parameter posterior.
VB-EM is an iterative algorithm, where each iteration consists of an E- and an M-step.
E-step. For the pre-stimulus period n = 1 : N_0 we compute the posterior over the interference factors u_n only. It is a Gaussian distribution with posterior mean ū_n and covariance Γ given by

$$\bar{u}_n = \Gamma \bar{B}^T \Lambda y_n, \qquad \Gamma = \left( \bar{B}^T \Lambda \bar{B} + I + K \Sigma_{BB} \right)^{-1} \qquad (6)$$

where B̄ and Σ_BB are the posterior mean and covariance of the interference mixing matrix B computed in the M-step below (more precisely, the posterior covariance of the i-th row of B is Σ_BB / λ_i).
For the post-stimulus period n = N_0 + 1 : N we compute the posterior over the evoked and interference factors x_n, u_n, and the collective state s_n of the evoked factors. The latter is defined by the L-dimensional vector s_n = (s_{1n}, s_{2n}, ..., s_{Ln}), where s_{jn} = 1 : S_j is the state of evoked factor j at time n. The total number of collective states is S = ∏_j S_j.
To simplify the notation, we combine the evoked and interference factors into a single L′ × 1 vector x′_n = (x_n, u_n), where L′ = L + M, and their mixing matrices into a single K × L′ matrix A′ = (A, B). Now, at time n, let r run over all the S collective states. For each r, the posterior over the factors conditioned on s_n = r is Gaussian, with posterior mean x̄_{rn}, ū_{rn} and covariance Γ_r given by

$$\bar{x}'_{rn} = \Gamma_r \left( \bar{A}'^T \Lambda y_n + \nu'_r \mu'_r \right), \qquad \Gamma_r = \left( \bar{A}'^T \Lambda \bar{A}' + \nu'_r + K \Sigma \right)^{-1} \qquad (7)$$

We have defined x̄′_{rn} = (x̄_{rn}, ū_{rn}) and Ā′ = (Ā, B̄). The L × 1 vector μ′_r and the diagonal L × L matrix ν′_r contain the means and precisions of the individual states (see (2)) composing r. The posterior mean and covariance Ā′, Σ are computed in the M-step. Next, compute the posterior probability that s_n = r by

$$\bar{\pi}_{rn} = \frac{1}{z_n}\, \pi_r \sqrt{|\Gamma_r|\,|\nu'_r|}\; \exp\!\left( -\tfrac{1}{2}\, y_n^T \Lambda y_n - \tfrac{1}{2}\, \mu'^T_r \nu'_r \mu'_r + \tfrac{1}{2}\, \bar{x}'^T_{rn} \Gamma_r^{-1} \bar{x}'_{rn} \right) \qquad (8)$$

where z_n is a normalization constant and π_r, μ′_r, ν′_r are the MOG parameters of (2).
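The pre-stimulus E-step (6) and the normalization of (8) reduce to a few linear-algebra operations. The sketch below assumes the notation above; the function names and the log-sum-exp treatment of the normalizer z_n are conventions of this sketch rather than part of the paper.

import numpy as np

def estep_prestimulus(y_pre, B_bar, Sigma_BB, lam, K):
    # Posterior over interference factors for n = 1:N0, Eq. (6).
    # y_pre: (K, N0) data, B_bar: (K, M) posterior mean of B,
    # Sigma_BB: (M, M) covariance block, lam: (K,) noise precisions.
    Lam = np.diag(lam)
    Gamma = np.linalg.inv(B_bar.T @ Lam @ B_bar
                          + np.eye(B_bar.shape[1]) + K * Sigma_BB)
    u_bar = Gamma @ B_bar.T @ Lam @ y_pre   # one column per time point
    return u_bar, Gamma

def state_responsibilities(log_w):
    # Turn unnormalized log-weights log pi_bar_{rn} (shape (S, N)) into
    # posterior state probabilities; z_n is absorbed by a log-sum-exp.
    log_w = log_w - log_w.max(axis=0, keepdims=True)   # numerical stability
    w = np.exp(log_w)
    return w / w.sum(axis=0, keepdims=True)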
M-step. We divide the model parameters into two sets. The first set includes the mixing matrices A, B, for which we compute full posterior distributions. The second set includes the noise precision Λ and the diagonal hyperparameter matrices α, β, for which we compute MAP estimates. The posterior over A, B is Gaussian, factorized over their rows, where the mean is

$$(\bar{A}, \bar{B}) = (R_{yx}, R_{yu})\, \Sigma, \qquad \Sigma = \begin{pmatrix} R_{xx} + \alpha & R_{xu} \\ R_{xu}^T & R_{uu} + \beta \end{pmatrix}^{-1} \qquad (9)$$

and where the i-th row of A′ = (A, B) has covariance Σ/λ_i. The hyperparameters α_j, β_j are the diagonal entries of the diagonal matrices α, β. R_yx, R_yu, R_xx, R_xu, R_uu are posterior correlations between the factors and the data and among the factors themselves, e.g., R_yx = Σ_n ⟨y_n x_n^T⟩, R_xx = Σ_n ⟨x_n x_n^T⟩, where ⟨·⟩ denotes posterior averaging. They are easily computed in terms of the E-step quantities ū_n, x̄′_{rn}, Γ, Γ_r, π̄_{rn} and are omitted.
Next, the hyperparameter matrices α, β are updated by

$$\alpha^{-1} = \mathrm{diag}\!\left( \bar{A}^T \Lambda \bar{A}/K + \Sigma_{AA} \right), \qquad \beta^{-1} = \mathrm{diag}\!\left( \bar{B}^T \Lambda \bar{B}/K + \Sigma_{BB} \right) \qquad (10)$$

and the noise precision matrix by Λ^{-1} = diag(R_yy − Ā R_yx^T − B̄ R_yu^T)/N. Σ_AA and Σ_BB are the appropriate blocks of Σ in (9). The interference mixing matrix and the noise precision are initialized from pre-stimulus data. We used MOG parameters corresponding to peaky (super-Gaussian) distributions.
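The M-step is equally compact. The following sketch implements one sweep of (9)-(10) plus the noise-precision update, assuming the posterior correlations R_yx, R_yu, R_xx, R_xu, R_uu, R_yy have been accumulated in the E-step; the function signature is ours, and in a full run α and β would be carried over between sweeps rather than defaulted to the identity as here.

import numpy as np

def mstep(R_yx, R_yu, R_xx, R_xu, R_uu, R_yy, lam, K, N,
          alpha=None, beta=None):
    L_, M_ = R_xx.shape[0], R_uu.shape[0]
    alpha = np.eye(L_) if alpha is None else alpha   # current hyperparameters
    beta = np.eye(M_) if beta is None else beta

    # Eq. (9): joint posterior mean of (A, B) and row covariance Sigma.
    Sigma = np.linalg.inv(np.block([[R_xx + alpha, R_xu],
                                    [R_xu.T, R_uu + beta]]))
    AB_bar = np.hstack([R_yx, R_yu]) @ Sigma
    A_bar, B_bar = AB_bar[:, :L_], AB_bar[:, L_:]

    # Eq. (10): hyperparameter updates from mean and covariance blocks.
    Lam = np.diag(lam)
    Sigma_AA, Sigma_BB = Sigma[:L_, :L_], Sigma[L_:, L_:]
    alpha = np.diag(1.0 / np.diag(A_bar.T @ Lam @ A_bar / K + Sigma_AA))
    beta = np.diag(1.0 / np.diag(B_bar.T @ Lam @ B_bar / K + Sigma_BB))

    # Noise precision update (diagonal of Lambda).
    lam_new = N / np.diag(R_yy - A_bar @ R_yx.T - B_bar @ R_yu.T)
    return A_bar, B_bar, Sigma, alpha, beta, lam_new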
Estimating and Localizing Clean Evoked Responses. Let z^j_{in} = ⟨A_{ij} x_{jn}⟩ denote the inferred individual contribution from evoked factor j to sensor signal i. It is given via posterior averaging by

$$\bar{z}^j_{in} = \bar{A}_{ij}\, \bar{x}_{jn} \qquad (11)$$

where x̄_n = Σ_r π̄_{rn} x̄_{rn}. Computing this estimate amounts to obtaining a clean version of the individual contribution from each factor and of their combined contribution, and removing contributions from interference factors and sensor noise.
The localization of individual evoked factors using the sensor signals z^j_n can be achieved by many algorithms. In this paper, we use adaptive spatial filters that take data correlation matrices as inputs for localization, because these methods have been shown to have superior spatial resolution and non-zero localization bias [6]. Let C^j = Σ_n ⟨z^j_n (z^j_n)^T⟩ denote the inferred sensor data correlation matrix corresponding to the individual contribution from evoked factor j. Then,

$$C^j = \bar{A}^j (\bar{A}^j)^T + \Lambda^{-1} (\Sigma_{AA})_{jj} (R_{xx})_{jj} \qquad (12)$$

where Ā^j is a K × 1 vector denoting the j-th column of Ā. Notice that the VB-EM approach has produced a correlation matrix that is automatically regularized (due to the Σ_AA term) and can be used for localization in its current form. In contrast, computing it from the signal estimates obtained by other methods, such as PCA or ICA, yields a poorly conditioned matrix that requires post-processing.
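A literal implementation of (12) is essentially a one-liner; the helper below is a sketch under the notation above, with our own function name.

import numpy as np

def factor_correlation(A_bar, Sigma_AA, lam, R_xx, j):
    # Regularized sensor correlation matrix C^j of Eq. (12) for factor j.
    a_j = A_bar[:, j:j + 1]                 # K x 1, j-th column of A_bar
    return a_j @ a_j.T + np.diag(1.0 / lam) * Sigma_AA[j, j] * R_xx[j, j]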
4 Experiments on Real and Simulated Data
Simulations. Fig. 1 shows a simulation with two evoked sources and three interference sources with N = 10000, signal-to-interference ratio (SIR) of 0 dB and signal-to-sensor-noise ratio (SNR) of 5 dB. The true locations of the evoked sources, each of which is denoted by ⋆, and the true locations of the background sources, each of which is denoted by ◦, are shown in the top left panel. The right column in the top row shows the time courses of the evoked sources as they appear at the sensors. The time courses of the actual sensor signals, which also include the effects of background sources and sensor noise, are shown in the middle row (right column). The bottom row shows the localization and time-course of cleaned evoked sources estimated using SEIFA, which agrees with the true location and time-course. Fig. 2 shows the mean performance as a function of SIR, across 50 Monte Carlo
trials for N = 1000 and SNR of 10 dB, for different locations of evoked and interference
sources. Denoising performance is quantified by the output signal-to-(noise+interference)
ratio (SNIR) and shown in the top panel. SEIFA outperforms both our benchmark methods, providing a 5-10 dB improvement over JADE [7] and SVD. Separation performance
of individual evoked factors is quantified by (separated-signal)-to-(noise+interference) ratio (SSNIR) (definition omitted) and is shown in the middle panel. SEIFA far outperforms
JADE for this set of examples. JADE is able to separate the background sources from the
evoked sources (hence gives good denoising performance), but it is not always able to separate the evoked sources from each other. The Infomax algorithm [4] (results not shown)
exhibited poor separation performance similar to JADE. Finally, localization performance
is quantified by the mean distance in cm between the true evoked source locations and the
estimated locations, as shown in the bottom panel. Here too, SEIFA far outperforms all
other methods, especially for low SIR. Notably, SEIFA performance appears to be quite
robust to the i.i.d. assumption of the evoked and background sources, because in these
simulations evoked sources were assumed to be damped sinusoids and interference sources
were sinusoids.
4.1 Real Data
Denoising averages from small number of trials. Auditory evoked responses from a
particular subject obtained by averaging different number of trials are shown in figure 3
(left panel). SEIFA is able to clearly recover responses even from small trial averages.
To quantify the performance of the different methods, a filtered version of the raw data for N_avg = 250 was assumed as "ground-truth", and is shown in the inset of the right panel. The output SNIR as a function of N_avg is also shown in figure 3 (right panel). SEIFA exhibits the best performance, especially for small trial averages.
Separation of evoked sources. To highlight SEIFA?s ability to separately localize evoked
sources, we conducted an experiment involving simultaneous presentation of auditory and
somatosensory stimuli. We expected the activation of contralateral auditory and somatosensory cortices to overlap in time. A pure tone (400ms duration, 1kHz, 5 ms ramp up/down)
was presented binaurally with a delay of 50 ms following a pneumatic tap on the left index finger. Averaging is performed over Navg = 100 trials triggered on the onset of the
tap. Results from SEIFA for this experiment are shown in Figure 4. In these figures, one
panel shows a contour map that shows the polarity and magnitude of the denoised and raw
sensor signals in sensor space. The contour plot of the magnetic field on the sensor array,
corresponding to the mapping of three-dimensional sensor surface array to points within a
circle, shows the magnetic field profile at a particular instant of time relative to the stimulus presentation. Other panels show localization of a particular evoked factor overlaid on
the subjects? MRI. Three orthogonal projections - axial, sagittal and coronal MRI slices,
that highlight all voxels having activity that is > 80% of maximum are shown. Results
are based on the right hemisphere channels above contralateral somatosensory and auditory cortices. The localization and time-courses of the first two factors estimated by SEIFA are shown in the left and middle panels of figure 4. The first two factors localize to primary somatosensory cortex (SI), however with differential latencies. The first factor shows a peak
response at a latency of 50 ms, whereas the second factor shows the response at a later
latency. Interestingly, the third factor localizes to auditory cortex and the extracted timecourse corresponds well to an auditory evoked response that is well-separated from the
somatosensory response (figure 3 right panels).
Figure 4: Estimated SEIFA factors for auditory-somatosensory experiment
5 Extensions
Whereas this paper uses fixed values for the number of evoked and interference sources
L, M (though the effective number of interference sources was determined via optimizing
the hyperparameter ?), VB-EM facilitates inferring them from data, and we plan to investigate the effectiveness of this procedure. We also plan to infer the distribution of evoked
sources (MOG parameters) from data rather than using a fixed distribution. Another extension that could enhance performance is exploiting temporal correlation in the data. We plan
to do it by incorporating temporal (e.g., autoregressive) models into the source distributions
and infer their parameters from data.
References
[1] S. Baillet, J. C. Mosher, and R. M. Leahy. Electromagnetic brain mapping. Signal Processing Magazine, 18:14-30, 2001.
[2] H. Attias (1999). Independent Factor Analysis. Neur. Comp. 11, 803-851.
[3] H. Attias (2000). A variational Bayes framework for graphical models. Adv. Neur. Info. Proc.
Sys. 12, 209-215.
[4] T.-P. Jung, S. Makeig, M. Westerfield, J. Townsend, E. Courchesne, T.J. Sejnowski (2000). Removal of eye artifacts from visual event related potentials in normal and clinical subjects. J. Clin.
Neurophys. 40, 516-520.
[5] S. Makeig, S. Debener, J. Onton, A. Delorme (2004). Mining event related brain dynamics.
Trends Cog. Sci. 8, 204-210.
[6] K. Sekihara, S. Nagarajan, D. Poeppel, A. Marantz, Y. Miyashita (2001). Reconstructing spatiotemporal activities of neural sources using a MEG vector beamformer technique. IEEE Trans.
Biomed. Eng. 48, 760-771.
[7] J.-F. Cardoso (1999). High-order contrasts for independent component analysis. Neural Computation, 11(1):157-192.
| 2777 |@word trial:13 version:3 sri:1 middle:3 loading:1 mri:2 suitably:1 bun:5 proportionality:1 simulation:4 covariance:6 eng:2 tr:1 reduction:1 series:2 mosher:1 denoting:1 suppressing:2 interestingly:1 outperforms:4 existing:1 current:1 com:1 neurophys:1 activation:2 si:1 must:5 mrsc:1 designed:1 plot:1 n0:10 tone:2 sys:1 ith:2 short:1 filtered:1 location:6 become:1 differential:1 consists:1 combine:1 autocorrelation:1 westerfield:1 notably:1 expected:1 ica:10 themselves:2 axn:3 brain:16 automatically:3 actual:1 estimating:2 underlying:2 notation:1 panel:11 factorized:1 cm:1 suppresses:1 unobserved:1 temporal:4 golden:1 makeig:2 unit:1 yn:11 appear:1 before:1 engineering:1 local:1 analyzing:4 quantified:3 evoked:67 statistically:3 lost:1 block:1 procedure:1 projection:2 pre:4 sln:1 onto:1 cannot:2 selection:2 xrn:1 map:3 duration:2 resolution:2 simplicity:1 pure:1 insight:1 estimator:1 array:3 courchesne:1 leahy:1 population:1 analogous:1 limiting:1 updated:1 spontaneous:1 magazine:1 us:2 element:2 trend:1 peaky:1 located:1 observed:3 bottom:2 adv:1 binaurally:1 valuable:1 dynamic:2 localization:10 easily:1 various:1 finger:1 separated:4 distinct:1 shortcoming:2 describe:2 monte:1 coronal:1 effective:1 sejnowski:1 outside:1 hino:1 jade:4 whose:1 quite:1 ramp:1 reconstruct:1 ability:2 statistic:4 radiology:3 advantage:3 sequence:1 triggered:1 reconstruction:3 relevant:1 combining:1 mixing:10 poorly:2 achieve:1 exploiting:2 convergence:1 produce:5 unmixing:2 help:2 ac:1 axial:1 ij:3 eq:1 implies:2 somatosensory:6 quantify:1 drawback:1 tokyo:2 filter:1 human:1 require:1 nagarajan:2 electromagnetic:5 decompose:1 biological:2 extension:3 hild:2 ground:1 normal:2 exp:1 overlaid:1 cognition:1 mapping:2 major:1 omitted:2 sometime:2 proc:1 currently:1 agrees:1 successfully:1 metropolitan:1 clearly:1 sensor:32 gaussian:13 always:1 aim:1 super:1 rather:5 l0:3 focus:1 improvement:1 contrast:3 inference:1 znj:2 a0:2 hidden:1 biomed:2 among:1 denoted:2 plan:3 spatial:3 field:3 having:1 yu:1 fmri:1 minimized:1 ryy:1 stimulus:31 simplify:1 employ:1 oja:1 individual:10 attempt:1 interest:2 investigate:1 mining:1 mixture:1 damped:1 accurate:2 beforehand:1 shorter:1 orthogonal:1 snir:2 divide:1 initialized:1 desired:1 circle:1 column:3 ar:1 localizing:1 zn:2 ijth:1 entry:1 snr:2 contralateral:2 delay:1 successful:1 conducted:1 too:1 optimally:1 spatiotemporal:2 combined:2 peak:1 probabilistic:6 rxu:3 infomax:1 enhance:1 continuously:1 recorded:1 reconstructs:1 containing:1 choose:2 expert:1 leading:2 potential:1 de:1 includes:2 coefficient:1 inc:1 satisfy:1 explicitly:2 onset:10 performed:1 later:1 analyze:1 recover:1 denoised:1 bayes:1 vin:1 contribution:7 sarela:1 square:2 accuracy:1 ynt:1 spaced:1 correspond:1 yield:1 blink:2 modelled:4 bayesian:4 raw:2 produced:1 carlo:1 comp:1 cc:1 explain:1 simultaneous:1 manual:2 definition:2 poeppel:1 frequency:1 gain:1 auditory:8 popular:1 knowledge:2 infers:2 dimensionality:3 amplitude:1 appears:1 higher:1 response:15 specify:1 pneumatic:1 box:1 though:2 correlation:7 working:1 artifact:4 effect:1 contain:3 true:4 hence:7 sinusoid:2 spatially:1 laboratory:2 during:1 m:4 complete:1 performs:1 meaning:1 variational:5 harmonic:1 novel:2 recently:1 superior:1 functional:1 khz:1 jp:1 significant:1 automatic:2 cortex:4 surface:1 posterior:17 optimizing:2 hemisphere:1 discard:1 termed:8 captured:2 minimum:1 preceding:1 r0:2 determine:2 paradigm:8 period:6 signal:41 multiple:1 full:2 reduces:1 infer:2 baillet:1 characterized:1 clinical:2 fined:1 long:1 post:6 
equally:1 zin:1 involving:1 mog:8 vigario:1 iteration:1 represent:2 normalization:1 achieved:2 background:5 whereas:3 addition:2 separately:1 beamformer:1 source:56 exhibited:1 subject:4 tend:1 recording:1 facilitates:3 db:4 ruu:2 effectiveness:1 near:1 ryx:3 identified:1 suboptimal:1 reduce:1 haij:1 br:1 attias:3 absent:1 pca:1 hxn:1 cause:1 constitute:3 jj:2 ignored:1 latency:3 cardoso:1 amount:1 ujn:2 reduced:1 millisecond:1 notice:1 estimated:4 arising:2 per:1 hyperparameter:2 rxx:4 group:1 four:1 threshold:1 localize:3 clean:4 imaging:7 run:1 fourth:1 x0n:1 vn:6 separation:6 vb:8 followed:1 activity:7 precisely:1 flat:1 generates:1 relatively:2 department:2 neur:2 combination:1 poor:2 conjugate:2 describes:3 across:1 em:14 reconstructing:1 conspicuous:1 making:2 explained:1 interference:36 taken:1 mutually:2 turn:1 mechanism:1 needed:1 available:1 gaussians:2 appropriate:1 magnetic:2 s2n:1 robustness:1 jn:1 substitute:1 assumes:1 denotes:1 top:3 include:1 graphical:5 clin:1 instant:1 const:1 yx:1 exploit:4 especially:2 quantity:1 spike:1 primary:1 dependence:1 diagonal:5 exhibit:1 hamalainen:1 subspace:2 distance:1 separate:8 simulated:3 separating:1 sci:1 considers:1 reason:2 assuming:1 meg:7 length:1 index:1 polarity:1 providing:2 ratio:2 difficult:2 info:1 design:3 collective:3 perform:1 datasets:1 benchmark:1 rn:11 arbitrary:1 jousmaki:1 inferred:4 introduced:1 required:2 cleaned:1 timecourse:2 tap:2 california:2 delorme:1 ryu:2 trans:2 miyashita:1 able:4 usually:2 below:4 overlap:3 event:2 treated:1 rely:1 examination:1 regularized:3 townsend:1 localizes:1 eye:3 numerous:1 temporally:1 sn:4 text:3 prior:5 voxels:1 removal:1 relative:1 sir:3 loss:1 fully:1 highlight:2 mixed:1 proportional:1 hyn:1 sagittal:1 offered:1 thresholding:2 uncorrelated:1 row:6 course:5 jung:1 free:1 jth:1 aij:2 bias:1 slice:1 noninvasive:1 xn:11 contour:2 computes:4 sensory:1 autoregressive:1 made:1 adaptive:1 san:5 far:2 bb:5 sj:12 observable:2 overfitting:2 active:3 assumed:2 francisco:5 thep:1 un:10 iterative:1 learn:1 channel:1 robust:2 ca:3 composing:1 obtaining:1 eeg:4 metallic:1 diag:3 sekihara:2 linearly:3 noise:18 arise:1 hyperparameters:4 profile:1 sjn:1 marantz:1 fig:2 slow:1 precision:11 inferring:2 third:3 learns:1 bij:2 removing:1 down:1 cog:1 specific:1 inset:1 incorporating:1 biomagnetic:2 adding:1 effectively:1 magnitude:1 conditioned:5 rejection:1 yin:1 xjn:6 visual:1 failed:1 collectively:1 aa:4 corresponds:2 truth:1 extracted:1 presentation:4 consequently:3 typical:1 determined:2 averaging:5 denoising:3 total:1 experimental:3 svd:1 s1n:1 latter:1 modulated:1 ucsf:2 dept:1 |
1,957 | 2,778 | From Weighted Classification to Policy Search
D. Blatt
Department of Electrical Engineering
and Computer Science
University of Michigan
Ann Arbor, MI 48109-2122
[email protected]
A. O. Hero
Department of Electrical Engineering
and Computer Science
University of Michigan
Ann Arbor, MI 48109-2122
[email protected]
Abstract
This paper proposes an algorithm to convert a T -stage stochastic decision
problem with a continuous state space to a sequence of supervised learning problems. The optimization problem associated with the trajectory
tree and random trajectory methods of Kearns, Mansour, and Ng, 2000,
is solved using the Gauss-Seidel method. The algorithm breaks a multistage reinforcement learning problem into a sequence of single-stage reinforcement learning subproblems, each of which is solved via an exact
reduction to a weighted-classification problem that can be solved using
off-the-self methods. Thus the algorithm converts a reinforcement learning problem into simpler supervised learning subproblems. It is shown
that the method converges in a finite number of steps to a solution that
cannot be further improved by componentwise optimization. The implication of the proposed algorithm is that a plethora of classification methods can be applied to find policies in the reinforcement learning problem.
1 Introduction
There has been increased interest in applying tools from supervised learning to problems
in reinforcement learning. The goal is to leverage techniques and theoretical results from
supervised learning for solving the more complex problem of reinforcement learning [3].
In [6] and [4], classification was incorporated into approximate policy iterations. In [2],
regression and classification are used to perform dynamic programming. Bounds on the
performance of a policy which is built from a sequence of classifiers were derived in [8]
and [9].
Similar to [8], we adopt the generative model assumption of [5] and tackle the problem of
finding good policies within an infinite class of policies, where performance is evaluated
in terms of empirical averages over a set of trajectory trees. In [8] the T-step reinforcement
learning problem was converted to a set of weighted classification problems by trying to fit
the classifiers to the maximal path on the trajectory tree of the decision process.
In this paper we take a different approach. We show that while the task of finding the global
optimum within a class of non-stationary policies may be overwhelming, the componentwise search leads to single step reinforcement learning problems which can be reduced
to a sequence of weighted classification problems. Our reduction is exact and is differ-
ent from the one proposed in [8]; it gives more weight to regions of the state space in
which the difference between the possible actions in terms of future reward is large, rather
than giving more weight to regions in which the maximal future reward is large. The
weighted classification problems can be solved by applying weights-sensitive classifiers or
by further reducing the weighted classification problem to a standard classification problem
using re-sampling methods (see [7], [1], and references therein for a description of both approaches). Based on this observation, an algorithm that converts the policy search problem
into a sequence of weighted classification problems is given. It is shown that the algorithm
converges in a finite number of steps to a solution, which cannot be further improved by
changing the control of a single stage while holding the rest of the policy fixed.
2 Problem Formulation
The results are presented in the context of MDPs but can be applied to POMDPs and non-Markovian decision processes as well. Consider a T-step MDP M = {S, A, D, P_{s,a}}, where S is a (possibly continuous) state space, A = {0, ..., L − 1} is a finite set of possible actions, D is the distribution of the initial state, and P_{s,a} is the distribution of the next state given that the current state is s and the action taken is a. The reward granted when taking action a at state s and making a transition to state s′ is assumed to be a known deterministic and bounded function of s′ denoted by r : S → [−M, M]. No generality is lost in specifying a known deterministic reward since it is possible to augment the state variable by an additional random component whose distribution depends on the previous state and action, and specify the function r to extract this random component. Denote by S_0, S_1, ..., S_T the random state variables.
A non-stationary deterministic policy π = (π_0, π_1, ..., π_{T−1}) is a sequence of mappings π_t : S → A, which are called controls. The control π_t specifies the action taken at time t as a function of the state at time t. The expected sum of rewards of a non-stationary deterministic policy π is given by

$$V(\pi) = E_\pi \left\{ \sum_{t=1}^{T} r(S_t) \right\}, \qquad (1)$$

where the expectation is taken with respect to the distribution over the random state variables induced by the policy π. We call V(π) the value of policy π. Non-stationary deterministic policies are considered since the optimal policy for a finite horizon MDP is non-stationary and deterministic [10]. Usually the optimal policy is defined as the policy that maximizes the value conditioned on the initial state, i.e.,

$$V_\pi(s) = E_\pi \left\{ \sum_{t=1}^{T} r(S_t) \,\Big|\, S_0 = s \right\}, \qquad (2)$$

for any realization s of S_0 [10]. The policy that maximizes the conditional value given each realization of the initial state also maximizes the value averaged over the initial state, and it is the unique maximizer if the distribution of the initial state D is positive over S.
for any realization s of S0 [10]. The policy that maximizes the conditional value given
each realization of the initial state also maximizes the value averaged over the initial state,
and it is the unique maximizer if the distribution of the initial state D is positive over S.
Therefore, when optimizing over all possible policies, the maximization of (1) and (2) are
equivalent. When optimizing (1) over a restricted class of policies, which does not contain
the optimal policy, the distribution over the initial state specifies the importance of different
regions of the state space in terms of the approximation error. For example, assigning high
probability to a certain region of S will favor policies that well approximate the optimal
policy over that region. Alternatively, maximizing (1) when D is a point mass at state s is
equivalent to maximizing (2).
Following the generative model assumption of [5], the initial distribution D and the conditional distribution P_{s,a} are unknown, but it is possible to generate realizations of the initial state according to D and the next state according to P_{s,a} for arbitrary state-action pairs (s, a). Given the generative model, n trajectory trees are constructed in the following manner. The root of each tree is a realization of S_0 generated according to the distribution D. Given the realization of the initial state, realizations of the next state S_1 given the L possible actions, denoted by S_1|a, a ∈ A, are generated. Note that this notation omits the dependence on the value of the initial state. Each of the L realizations of S_1 is now the root of a subtree. These iterations continue to generate a depth-T tree. Denote by S_t|i_0, i_1, ..., i_{t−1} the random variable generated at the node that follows the sequence of actions i_0, i_1, ..., i_{t−1}. Hence, each tree is constructed using a single call to the initial state generator and L^T − 2 calls to the next state generator.
Figure 1: A binary trajectory tree.
Consider a class of policies Π, i.e., each element of Π is a sequence of T mappings from S to A. It is possible to estimate the value of any policy in the class from the set of trajectory trees by simply averaging the sum of rewards on each tree along the path that agrees with the policy [5]. Denote by V̂^i(π) the observed value on the i-th tree along the path that corresponds to the policy π. Then the value of the policy π is estimated by

$$\hat{V}_n(\pi) = n^{-1} \sum_{i=1}^{n} \hat{V}^i(\pi). \qquad (3)$$
In [5], the authors show that with high probability (over the data set) V̂_n(π) converges uniformly to V(π) (1) with rates that depend on the VC-dimension of the policy class. This result motivates the use of policies π with high V̂_n(π), since with high probability these policies have high values of V(π). In this paper, we consider the problem of finding policies that obtain high values of V̂_n(π).
3 A Reduction From a Single Step Reinforcement Learning Problem to Weighted Classification
The building block of the proposed algorithm is an exact reduction from a single step reinforcement learning to a weighted classification problem. Consider the single step decision process. An initial state S_0 generated according to the distribution D is followed by one of L possible actions A ∈ {0, 1, ..., L − 1}, which leads to a transition to state S_1 whose conditional distribution, given that the initial state is s and the action is a, is P_{s,a}. Given a class of policies Π, where each policy in Π is a map from S to A, the goal is to find

$$\hat{\pi} \in \arg\max_{\pi \in \Pi} \hat{V}_n(\pi). \qquad (4)$$

In this single step problem the data are n realizations of the random element {S_0, S_1|0, S_1|1, ..., S_1|L − 1}. Denote the i-th realization by {s_0^i, s_1^i|0, s_1^i|1, ..., s_1^i|L − 1}. In this case, V̂_n(π) can be written explicitly as

$$\hat{V}_n(\pi) = E_n \left\{ \sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l) \right\}, \qquad (5)$$

where for a function f, E_n{f(S_0, S_1|0, S_1|1, ..., S_1|L − 1)} is its empirical expectation n^{-1} Σ_{i=1}^n f(s_0^i, s_1^i|0, s_1^i|1, ..., s_1^i|L − 1), and I(·) is the indicator function taking a value of one when its argument is true and zero otherwise.
The following proposition shows that the problem of maximizing the empirical reward (5) is equivalent to a weighted classification problem.

Proposition 1 Given a class of policies Π and a set of n trajectory trees,

$$\arg\max_{\pi \in \Pi} E_n \left\{ \sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l) \right\} = \arg\min_{\pi \in \Pi} E_n \left\{ \sum_{l=0}^{L-1} \left( \max_k r(S_1|k) - r(S_1|l) \right) I(\pi(S_0) = l) \right\}. \qquad (6)$$

The proposition implies that the maximizer of the empirical reward over a class of policies is the output of an optimal weights dependent classifier for the data set

$$\left\{ \left( s_0^i,\ \arg\max_k r(s_1^i|k),\ w^i \right) \right\}_{i=1}^{n},$$

where for each sample, the first argument is the example, the second is the label, and

$$w^i = \left( \max_k r(s_1^i|k) - r(s_1^i|0),\ \max_k r(s_1^i|k) - r(s_1^i|1),\ \ldots,\ \max_k r(s_1^i|k) - r(s_1^i|L-1) \right)$$

is the realization of the L costs of classifying example i to each of the possible labels. Note that the realizations of the costs are always non-negative and the cost of the correct classification (arg max_k r(s_1^i|k)) is always zero. The solution to the weighted classification problem is a map from S to A which minimizes the empirical weighted misclassification error (6). The proposition asserts that this mapping is also the control which maximizes the empirical reward (5).
Proof 1 For all j ∈ {0, 1, ..., L − 1},

$$\sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l) = r(S_1|j) + \sum_{l=0}^{L-1} \left( r(S_1|l) - r(S_1|j) \right) I(\pi(S_0) = l). \qquad (7)$$

In addition,

$$E_n \left\{ \sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l) \right\} = \sum_{j=0}^{L-1} E_n \left\{ I\!\left( \arg\max_k r(S_1|k) = j \right) \sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l) \right\}.$$

Substituting (7), and noting that on the event {arg max_k r(S_1|k) = j} we have r(S_1|l) − r(S_1|j) = −(max_k r(S_1|k) − r(S_1|l)), we obtain

$$E_n \left\{ \sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l) \right\} = \sum_{j=0}^{L-1} E_n \left\{ I\!\left( \arg\max_k r(S_1|k) = j \right) r(S_1|j) \right\} - E_n \left\{ \sum_{l=0}^{L-1} \left( \max_k r(S_1|k) - r(S_1|l) \right) I(\pi(S_0) = l) \right\}.$$

The first term in the last expression is independent of π and the result follows.
In the binary case, the optimization problem is

$$\arg\min_{\pi \in \Pi} E_n \left\{ \left| r(S_1|0) - r(S_1|1) \right|\, I\!\left( \pi(S_0) \neq \arg\max_k r(S_1|k) \right) \right\},$$

i.e., the single step reinforcement learning problem reduces to the weighted classification problem with samples

$$\left\{ \left( s_0^i,\ \arg\max_{k \in \{0,1\}} r(s_1^i|k),\ \left| r(s_1^i|0) - r(s_1^i|1) \right| \right) \right\}_{i=1}^{n},$$

where for each sample, the first argument is the example, the second is the label, and the third is a realization of the cost incurred when misclassifying the example. Note that this is different from the reduction in [8]. When applying the reduction in [8] to our single step problem the costs are taken to be max_{k∈{0,1}} r(s_1^i|k) rather than |r(s_1^i|0) − r(s_1^i|1)|. Setting the costs to max_{k∈{0,1}} r(s_1^i|k) instead of |r(s_1^i|0) − r(s_1^i|1)| favors classifiers which perform well in regions where the maximal reward is large (regardless of the difference between the two actions) instead of regions where the difference between the rewards that result from the two actions is large. It is easy to set up an example of a simple MDP and a restricted class of policies, which do not include the optimal policy, in which the classifier that minimizes the weighted misclassification problem with costs max_{k∈{0,1}} r(s_1^i|k) is not equivalent to the optimal policy. When using our reduction, they are always equivalent.
On the other hand, in [8] the choice max_{k∈{0,1}} r(s_1^i|k) led to a bound on the performance of the policy in terms of the performance of the classifier. We do not pursue this type of bound here since, given the classifier, the performance of the resulting policy can be directly estimated from (5). Given a sequence of classifiers, the value of the induced sequence of controls (or policy) can be estimated directly by (3) with generalization guarantees provided by the bounds in [5]. In [2], a certain single step binary reinforcement learning problem is converted to weighted classification by averaging multiple realizations of the rewards under the two possible actions for each state. As seen here, this Monte Carlo approach is not necessary; it is sufficient to sample the rewards once for each state.
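The reduction itself is mechanical. The sketch below builds the weighted-classification sample of Proposition 1 from single-step data; the array layout is an assumption of this sketch, and the output can be fed to any weight-sensitive classifier or to a re-sampling wrapper in the sense of [1, 7].

import numpy as np

def weighted_classification_data(s0, r1):
    # s0: (n, d) initial states; r1: (n, L) rewards r(s_1^i | l) observed
    # after each of the L actions. Labels are the reward-maximizing actions;
    # the per-class costs w^i vanish exactly on the correct label.
    labels = r1.argmax(axis=1)
    costs = r1.max(axis=1, keepdims=True) - r1
    return s0, labels, costs

def binary_weights(r1):
    # Binary case: scalar misclassification weights |r(s_1^i|0) - r(s_1^i|1)|.
    return np.abs(r1[:, 0] - r1[:, 1])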
4 Finding Good Policies for T-Step Markov Decision Processes by Solving a Sequence of Weighted Classification Problems
Given the class of policies Π, the algorithm updates the controls π_0, ..., π_{T−1} one at a time in a cyclic manner while holding the rest constant. Each update is formulated as a single step reinforcement learning problem which is then converted to a weighted classification problem. In practice, if the weighted classification problem is only approximately solved, then the new control is accepted only if it leads to a higher value of V̂. When updating π_t, the trees are pruned from the root to stage t by keeping only the branch which agrees with the controls π_0, π_1, ..., π_{t−1}. Then a single step reinforcement learning problem is formulated at time step t, where the realization of the reward which follows action a ∈ A at stage t is the immediate reward obtained at the state which follows action a plus the sum of rewards which are accumulated along the branch which agrees with the controls π_{t+1}, π_{t+2}, ..., π_{T−1}. The iterations end after the first complete cycle with no parameter modifications.
Note that when updating π_t, each tree contributes one realization of the state at time t. A result of the pruning process is that the ensemble of state realizations is drawn from the distribution induced by the policy up to time t − 1. In other words, the algorithm relaxes the requirement in [2] to have access to a baseline distribution - a distribution over the states that is induced by a good policy. Our algorithm automatically generates samples from distributions that are induced by a sequence of monotonically improving policies.
Figure 2: Updating π_1. In the example: pruning down according to π_0(S_0) = 0, propagating rewards up according to π_2(S_2|00) = 1, and π_2(S_2|01) = 0.
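The complete procedure can be written as a short Gauss-Seidel loop. The skeleton below is a sketch: fit_stage and value_estimate are hypothetical callbacks standing in, respectively, for the tree pruning plus weighted classification at stage t described above, and for the empirical value (3).

def cyclic_policy_search(controls, fit_stage, value_estimate, max_cycles=100):
    # controls: list of the T stage controls pi_0, ..., pi_{T-1}.
    best = value_estimate(controls)
    for _ in range(max_cycles):
        improved = False
        for t in range(len(controls)):
            candidate = fit_stage(t, controls)
            trial = controls[:t] + [candidate] + controls[t + 1:]
            v = value_estimate(trial)
            if v > best:                 # accept only improving updates
                controls, best = trial, v
                improved = True
        if not improved:                 # a full cycle with no update: stop
            break
    return controls, best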
Proposition 2 The algorithm converges after a finite number of iterations to a policy that
cannot be further improved by changing one of the controls and holding the rest fixed.
Proof 2 Writing the empirical average sum of rewards V̂_n(π) explicitly as

$$\hat{V}_n(\pi) = E_n \left\{ \sum_{i^0, \ldots, i^{T-1} \in A} I(\pi_0(S^0) = i^0)\, I(\pi_1(S^1|i^0) = i^1) \cdots I(\pi_{T-1}(S^{T-1}|i^0, i^1, \ldots, i^{T-2}) = i^{T-1}) \sum_{t=1}^{T} r(S^t|i^0, i^1, \ldots, i^{t-1}) \right\},$$

it can be seen that the algorithm is a Gauss-Seidel algorithm for maximizing V̂_n(π), where, at each iteration, optimization of π_t is carried out at one of the stages t while keeping π_{t′}, t′ ≠ t, fixed. At each iteration the previous control is a valid solution and hence the objective function is non-decreasing. Since V̂_n(π) is evaluated using a finite number of trees, it can take only a finite set of values. Therefore, we must reach a cycle with no updates after a finite number of iterations. A cycle with no improvements implies that we cannot increase the empirical average sum of rewards by updating one of the π_t's.
5 Initialization
There are two possible initial policies that can be extracted from the set of trajectory trees. One possible initial policy is the myopic policy, which is computed from the root of the tree downwards. Starting from the root, π_0 is found by solving the single stage reinforcement learning problem resulting from taking into account only the immediate reward at the next state. Once the weighted classification problem is solved, the trees are pruned by following the action which agrees with π_0. The remaining realizations of state S_1 follow the distribution induced by the myopic control of the first stage. The process is continued to stage T − 1.
The second possible initial policy is computed from the leaves backward to the root. Note that the distribution of the state at a leaf that is chosen at random is the distribution of the state when a randomized policy is used. Therefore, to find the best control at stage T − 1, given that the previous T − 2 controls choose random actions, we solve the weighted classification problem induced by considering all the realizations of the state S_{T−1} from all the trees (these are not independent observations) or choose randomly one realization from each tree (these are independent realizations). Given the classifier, we use the equivalent control π_{T−1} to propagate the rewards up to the previous stage and solve the resulting weighted classification problem. This is carried out recursively up to the root of the tree.
6 Extensions
The results presented in this paper generalize to the non-Markovian setting as well. In particular, when the state space, action space, and the reward function depend on time, and the distribution over the next state depends on all past states and actions, we will be dealing with non-stationary deterministic policies π = (π_0, π_1, ..., π_{T−1}); π_t : S_0 × A_0 × ... × S_{t−1} × A_{t−1} × S_t → A_t, t = 0, 1, ..., T − 1. POMDPs can be dealt with in terms of the belief states as a continuous state space MDP, or as a non-Markovian process in which policies depend directly on all past observations.
While we focused on the trajectory tree method, the algorithm can be easily modified to
solve the optimization problem associated with the random trajectory method [5] by adjusting the single step reinforcement learning reduction and the pruning method presented
here.
7 Illustrative Example
The following example illustrates the aspects of the problem and the components of our solution. The simulated system is a two-step MDP, with continuous state space S = [0, 1] and a binary action space A = {0, 1}. The distribution over the initial state is uniform. Given state s and action a the next state s′ is generated by s′ = mod(s + 0.33a + 0.1 randn, 1), where mod(x, 1) is the fractional part of x, and randn is a Gaussian random variable independent of the other variables in the problem. The reward function is r(s) = s sin(πs). We consider a class of policies parameterized by a continuous parameter: Π = {π(·; λ) | λ = (λ_0, λ_1) ∈ [0, 2]²}, where π_i(s; λ_i) = 1 when λ_i ≤ 1 and s > λ_i, or when λ_i > 1 and s < λ_i − 1, and zero otherwise, i = 0, 1.
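This simulation is easy to reproduce. The sketch below builds n = 20 depth-two binary trajectory trees for this MDP and evaluates the empirical value over a grid of thresholds; the grid resolution and the random seed are our choices.

import numpy as np

rng = np.random.default_rng(1)
n = 20

def step(s, a):
    return np.mod(s + 0.33 * a + 0.1 * rng.standard_normal(), 1.0)

def reward(s):
    return s * np.sin(np.pi * s)

def control(s, lam):
    # The threshold policies of the class Pi defined above.
    return int(s > lam) if lam <= 1.0 else int(s < lam - 1.0)

# Each tree: a root s0, two children s1[a], four grandchildren s2[(a, b)].
trees = []
for _ in range(n):
    s0 = rng.uniform()
    s1 = {a: step(s0, a) for a in (0, 1)}
    s2 = {(a, b): step(s1[a], b) for a in (0, 1) for b in (0, 1)}
    trees.append((s0, s1, s2))

def empirical_value(lam0, lam1):
    # V_hat_n of Eq. (3): average reward along the path that agrees
    # with the policy (lam0, lam1) on each tree.
    total = 0.0
    for s0, s1, s2 in trees:
        a = control(s0, lam0)
        b = control(s1[a], lam1)
        total += reward(s1[a]) + reward(s2[(a, b)])
    return total / n

grid = np.linspace(0.0, 2.0, 41)
V = np.array([[empirical_value(l0, l1) for l1 in grid] for l0 in grid])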
In Figure 3 the objective function V̂_n(π(λ)), estimated from n = 20 trees, is presented as a function of λ_0 and λ_1. The path taken by the algorithm, superimposed on the contour plot of V̂_n(π(λ)), is also presented. Starting from the arbitrary point 0, the algorithm performs optimization with respect to one of the coordinates at a time and converges after 3 iterations.
Figure 3: The objective function V̂_n(π(λ)) and the path taken by the algorithm.
References
[1] N. Abe, B. Zadrozny, and J. Langford. An iterative method for multi-class cost-sensitive learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 3-11, 2004.
[2] J. Bagnell, S. Kakade, A. Ng, and J. Schneider. Policy search by dynamic programming. In
Advances in Neural Information Processing Systems, volume 16. MIT Press, 2003.
[3] A. G. Barto and T. G. Dietterich. Reinforcement learning and its relationship to supervised
learning. In J. Si, A. Barto, W. Powell, and D. Wunsch, editors, Handbook of learning and
approximate dynamic programming. John Wiley and Sons, Inc, 2004.
[4] A. Fern, S. Yoon, and R. Givan. Approximate policy iteration with a policy language bias. In
Advances in Neural Information Processing Systems, volume 16, 2003.
[5] M. Kearns, Y. Mansour, and A. Ng. Approximate planning in large POMDPs via reusable
trajectories. In Advances in Neural Information Processing Systems, volume 12. MIT Press,
2000.
[6] M. Lagoudakis and R. Parr. Reinforcement learning as classification: Leveraging modern classifiers. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[7] J. Langford and A. Beygelzimer. Sensitive error correcting output codes. In Proceedings of the
18th Annual Conference on Learning Theory, pages 158-172, 2005.
[8] J. Langford and B. Zadrozny. Reducing T-step reinforcement learning to classification.
http://hunch.net/~jl/projects/reductions/reductions.html, 2003.
[9] J. Langford and B. Zadrozny. Relating reinforcement learning performance to classification performance. In Proceedings of the Twenty Second International Conference on Machine Learning,
pages 473-480, 2005.
[10] M. L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John
Wiley & Sons, Inc, 1994.
bias:1 taking:3 depth:1 dimension:1 transition:2 valid:1 contour:1 author:1 reinforcement:20 approximate:5 pruning:3 dealing:1 global:1 handbook:1 assumed:1 alternatively:1 search:4 continuous:5 iterative:1 contributes:1 improving:1 complex:1 s2:2 en:14 downwards:1 wiley:2 third:1 down:1 importance:1 subtree:1 conditioned:1 illustrates:1 horizon:1 michigan:2 lt:1 simply:1 led:1 twentieth:1 corresponds:1 extracted:1 acm:1 conditional:3 goal:2 formulated:2 ann:2 infinite:1 reducing:2 uniformly:1 averaging:2 kearns:2 called:1 accepted:1 arbor:2 gauss:2 pfor:1 |
1,958 | 2,779 | Generalization Error Bounds for Aggregation by
Mirror Descent with Averaging
Anatoli Juditsky
Laboratoire de Mod?elisation et Calcul - Universit?e Grenoble I
B.P. 53, 38041 Grenoble, France
[email protected]
Alexander Nazin
Institute of Control Sciences - Russian Academy of Science
65, Profsoyuznaya str., GSP-7, Moscow, 117997, Russia
[email protected]
Alexandre Tsybakov
Laboratoire de Probabilités et Modèles Aléatoires - Université Paris VI
4, place Jussieu, 75252 Paris Cedex, France
[email protected]
Nicolas Vayatis
Laboratoire de Probabilités et Modèles Aléatoires - Université Paris VI
4, place Jussieu, 75252 Paris Cedex, France
[email protected]
Abstract
We consider the problem of constructing an aggregated estimator from
a finite class of base functions which approximately minimizes a convex risk functional under the ℓ1 constraint. For this purpose, we propose
a stochastic procedure, the mirror descent, which performs gradient descent in the dual space. The generated estimates are additionally averaged in a recursive fashion with specific weights. Mirror descent algorithms have been developed in different contexts and they are known to
be particularly efficient in high dimensional problems. Moreover their
implementation is adapted to the online setting. The main result of the
paper is the upper bound on the convergence rate for the generalization
error.
1 Introduction
We consider the aggregation problem (cf. [16]) where we have at hand a finite class of M predictors which are to be combined linearly under an ℓ1 constraint ‖θ‖_1 = R on the vector θ ∈ R^M that determines the coefficients of the linear combination. In order to exhibit such a combination, we focus on the strategy of penalized convex risk minimization which is motivated by recent statistical studies of boosting and SVM algorithms [11, 14, 18]. Moreover, we take a stochastic approximation approach which is particularly relevant in the online setting since it leads to recursive algorithms where the update uses a single data observation per iteration step. In this paper, we consider a general setting for which we
propose a novel stochastic gradient algorithm and show tight upper bounds on its expected
accuracy. Our algorithm builds on the ideas of mirror descent methods, first introduced by
Nemirovski and Yudin [12], which consider updates of the gradient in the dual space. The
mirror descent algorithm has been successfully applied in high dimensional problems both
in deterministic and stochastic settings [2, 7]. In the present work, we describe a particular instance of the algorithm with an entropy-like proxy function. This method presents
similarities with the exponentiated gradient descent algorithm which was derived under different motivations in [10]. A crucial distinction between the two is the additional averaging
step in our version which guarantees statistical performance. The idea of averaging recursive procedures is well-known (see e.g. [13] and the references therein) and it has been
invoked recently by Zhang [19] for the standard stochastic gradient descent (taking place
in the initial parameter space). Also it is worth noticing that most of the existing online
methods are evaluated in terms of relative loss bounds which are related to the empirical
risk while we focus on generalization error bounds (see [4, 5, 10] for insights on connections between the two types of criteria). The rest of the paper is organized as follows.
We first introduce the setup (Section 2), then we describe the algorithm and state the main
convergence result (Section 3). Further we provide the intuition underlying the proposed
algorithm, and compare it to other methods (Section 4). We end up with a technical section
dedicated to the proof of our main result (Section 5).
2 Setup and notations
Let Z be a random variable with values in a measurable space (Z, A). We set a parameter R > 0 and an integer M ≥ 2. The unknown parameter is a vector θ ∈ R^M which is compelled to stay in the decision set Θ = Θ_{M,R} defined by:

$$\Theta_{M,R} = \left\{ \theta = (\theta^{(1)}, \ldots, \theta^{(M)})^T \in \mathbb{R}^M_+ : \sum_{i=1}^{M} \theta^{(i)} = R \right\}. \qquad (1)$$

Now we introduce the loss function Q : Θ × Z → R_+ such that the random function Q(·, Z) : Θ → R_+ is convex for almost all Z, and define the convex risk function A : Θ → R_+ to be minimized as follows:

$$A(\theta) = E\, Q(\theta, Z). \qquad (2)$$

Assume a training sample is given in the form of a sequence (Z_1, ..., Z_{t−1}), where each Z_i has the same distribution as Z. We assume for simplicity that the training sequence is i.i.d., though this assumption can be weakened.
We propose to minimize the convex target function A over the decision set Θ on the basis of the stochastic sub-gradients of Q:

$$u_i(\theta) = \nabla_\theta Q(\theta, Z_i), \qquad i = 1, 2, \ldots \qquad (3)$$

Note that the expectations E u_i(θ) belong to the sub-differential of A(θ).
In the sequel, we will characterize the accuracy of an estimate θ̂_t = θ̂_t(Z_1, ..., Z_{t−1}) ∈ Θ of the minimizer of A by the excess risk:

$$E\, A(\hat{\theta}_t) - \min_{\theta \in \Theta} A(\theta), \qquad (4)$$

where the expectation is taken over the sample (Z_1, ..., Z_{t−1}).
We now introduce the notation that is necessary to present the algorithm in the next section. For a vector z = (z^{(1)}, ..., z^{(M)})^T ∈ R^M, define the norms

$$\|z\|_1 = \sum_{j=1}^{M} |z^{(j)}|, \qquad \|z\|_\infty = \max_{\|\theta\|_1 = 1} z^T \theta = \max_{j=1,\ldots,M} |z^{(j)}|.$$

The space R^M equipped with the norm ‖·‖_1 is called the primal space E, and the same space equipped with the dual norm ‖·‖_∞ is called the dual space E*.
Introduce a so-called entropic proxy function:

$$V(\theta) = R \ln(M/R) + \sum_{j=1}^{M} \theta^{(j)} \ln \theta^{(j)}, \qquad \theta \in \Theta, \qquad (5)$$

which has its minimum at θ_0 = (R/M, ..., R/M)^T. It is easy to check that this function is α-strongly convex with respect to the norm ‖·‖_1 with parameter α = 1/R, i.e.,

$$V(sx + (1-s)y) \le s V(x) + (1-s) V(y) - \frac{\alpha}{2}\, s(1-s)\, \|x - y\|_1^2 \qquad (6)$$

for all x, y ∈ Θ and any s ∈ [0, 1].
Let β > 0 be a parameter. We call the β-conjugate of V the following convex transform:

$$\forall z \in \mathbb{R}^M, \qquad W_\beta(z) \;\stackrel{\mathrm{def}}{=}\; \sup_{\theta \in \Theta} \left( -z^T \theta - \beta V(\theta) \right).$$

As it straightforwardly follows from (5), the β-conjugate is given here by:

$$W_\beta(z) = \beta R \ln\left( \frac{1}{M} \sum_{k=1}^{M} e^{-z^{(k)}/\beta} \right), \qquad \forall z \in \mathbb{R}^M, \qquad (7)$$

which has a Lipschitz-continuous gradient w.r.t. ‖·‖_1, namely,

$$\left\| \nabla W_\beta(z) - \nabla W_\beta(\tilde{z}) \right\|_1 \le \frac{R}{\beta} \left\| z - \tilde{z} \right\|_\infty, \qquad \forall z, \tilde{z} \in \mathbb{R}^M. \qquad (8)$$
Though we will focus on a particular algorithm based on the entropic proxy function, our
results apply for a generic algorithmic scheme which takes advantage of the general properties of convex transforms (see [8] for details). The key property in the proof is the inequality
(8).
3
Algorithm and main result
The mirror descent algorithm is a stochastic gradient algorithm in the dual space. At each
iteration i, a new data point (Xi , Yi ) is observed and there are two updates: one is the value
?i as the result of the stochastic gradient descent in the dual space, the other is the update of
the parameter ?i which is the ?mirror image? of ?i . In order to tune the algorithm properly,
we need two fixed positive sequences (?i )i?1 (stepsize) and (?i )i?1 (temperature) such
that ?i ? ?i?1 . The mirror descent algorithm with averaging is as follows:
Algorithm.
• Fix the initial values θ_0 ∈ Θ and ζ_0 = 0 ∈ R^M.
• For i = 1, ..., t−1, do

    ζ_i = ζ_{i−1} + γ_i u_i(θ_{i−1}),
    θ_i = −∇W_{β_i}(ζ_i).   (9)

• Output at iteration t the following convex combination:

    θ̄_t = ( Σ_{i=1}^t γ_i θ_{i−1} ) / ( Σ_{j=1}^t γ_j ).   (10)
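As a minimal illustration (ours; the loss and its sub-gradient are supplied by the caller), the averaged scheme (9)-(10) with the tuning (11) below can be written in a few lines, reusing the mirror() map sketched earlier:

```python
import numpy as np

def mirror_descent_averaging(sub_grad, samples, M, beta, L):
    """Mirror descent with averaging on Theta_{M,beta}; samples = (Z_1,...,Z_{t-1}).
    sub_grad(theta, z) must return the stochastic sub-gradient u_i(theta)."""
    beta0 = L / np.sqrt(np.log(M))            # tuning from the Theorem
    theta = np.full(M, beta / M)              # theta_0: the minimizer of V
    zeta = np.zeros(M)                        # dual variable, zeta_0 = 0
    num, den = np.zeros(M), 0.0               # accumulators for (10)
    for i, z in enumerate(samples, start=1):
        gamma_i = 1.0                         # stepsize, cf. (11)
        beta_i = beta0 * np.sqrt(i + 1)       # temperature, cf. (11)
        num += gamma_i * theta                # adds gamma_i * theta_{i-1}
        den += gamma_i
        zeta = zeta + gamma_i * sub_grad(theta, z)   # dual update in (9)
        theta = mirror(zeta, beta_i, beta)           # theta_i = -grad W_{beta_i}(zeta_i)
    num += theta                              # final term gamma_t * theta_{t-1}
    den += 1.0
    return num / den                          # theta_bar_t, cf. (10)
```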
At this point, we actually have described a class of algorithms. Given the observations of
the stochastic sub-gradient (3), particular choices of the proxy function V , of the stepsize
and temperature parameters, will determine the algorithm completely. We discuss these
choices with more details in [8]. In this paper, we focus on the entropic proxy function and
consider a nearly optimal choice for the stepsize and temperature parameters which is the
following:
    γ_i ≡ 1,   β_i = β_0 √(i+1),   i = 1, 2, ...,   β_0 > 0.   (11)
We can now state our rate of convergence result.
Theorem. Assume that the loss function Q satisfies the following boundedness condition:

    sup_{θ∈Θ} E ‖∇_θ Q(θ, Z)‖_∞² ≤ L² < ∞.   (12)

Fix also β_0 = L/√(ln M). Then, for any integer t ≥ 1, the excess risk of the estimate θ̄_t
described above satisfies the following bound:

    E A(θ̄_t) − min_{θ∈Θ} A(θ) ≤ 2Lβ (ln M)^{1/2} √(t+1) / t.   (13)
Example. Consider the setting of supervised learning where the data are modelled by a
pair (X, Y), with X ∈ X an observation vector and Y a label, either integer (classification)
or real-valued (regression). Boosting and SVM algorithms are related to the minimization
of a functional

    R(f) = E φ(Y f(X)),

where φ is a convex non-negative cost function (typically exponential, logit or hinge loss)
and f belongs to a given class of combined predictors. The aggregation problem consists in
finding the best linear combination of elements from a finite set of predictors {h_1, ..., h_M}
with h_j : X → [−K, K]. In compact notation, this means that we search for f of the
form f = θ^T H, with H denoting the vector-valued function whose components are these
base predictors:

    H(x) = (h_1(x), ..., h_M(x))^T,

and θ belonging to a decision set Θ = Θ_{M,β}. Take for instance φ to be non-increasing.
It is easy to see that this problem can be interpreted in terms of our general setting with
Z = (X, Y), Q(Z, θ) = φ(Y θ^T H(X)) and L = K |φ′(−Kβ)|.
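To connect the example with the algorithm, here is a sketch (ours; H is a user-supplied stack of base predictors) of the stochastic sub-gradient u_i(θ) = ∇_θ φ(Y θ^T H(X)) for the logit loss φ(x) = ln(1 + e^{−x}):

```python
import numpy as np

def sub_grad_logit(theta, z, H):
    """u(theta) for Q((x, y), theta) = phi(y * theta^T H(x)) with the logit loss."""
    x, y = z
    h = H(x)                                  # base predictions, each in [-K, K]
    margin = y * theta.dot(h)
    return -y * h / (1.0 + np.exp(margin))    # phi'(m) = -1 / (1 + e^m)
```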
4
Discussion
In this section, we provide some insights on the method and the result of the previous
section.
4.1
Heuristics
Suppose that we want to minimize a convex function θ ↦ A(θ) over a convex set Θ. If
θ_0, ..., θ_{t−1} are the available search points at iteration t, we can form the affine approximations φ_i of the function A defined, for θ ∈ Θ, by

    φ_i(θ) = A(θ_{i−1}) + (θ − θ_{i−1})^T ∇A(θ_{i−1}),   i = 1, ..., t.

Here θ ↦ ∇A(θ) is a vector function belonging to the sub-gradient of A(θ). Taking a
convex combination of the φ_i's, we obtain an averaged approximation of A(θ):

    φ̄_t(θ) = ( Σ_{i=1}^t γ_i [ A(θ_{i−1}) + (θ − θ_{i−1})^T ∇A(θ_{i−1}) ] ) / ( Σ_{i=1}^t γ_i ).
At first glance, it would seem reasonable to choose as the next search point a vector θ ∈ Θ
minimizing the approximation φ̄_t, i.e.,

    θ_t = argmin_{θ∈Θ} φ̄_t(θ) = argmin_{θ∈Θ} θ^T ( Σ_{i=1}^t γ_i ∇A(θ_{i−1}) ).   (14)
However, this does not make any progress, because our approximation is "good" only in
the vicinity of the search points θ_0, ..., θ_{t−1}. Therefore, it is necessary to modify the criterion,
for instance, by adding a special penalty B_t(θ, θ_{t−1}) to the target function in order to keep
the next search point θ_t in the desired region. Thus, one chooses the point:

    θ_t = argmin_{θ∈Θ} [ θ^T ( Σ_{i=1}^t γ_i ∇A(θ_{i−1}) ) + B_t(θ, θ_{t−1}) ].   (15)
Our algorithm corresponds to a specific type of penalty B_t(θ, θ_{t−1}) = β_t V(θ), where
V is the proxy function. Also note that in our problem the vector function ∇A(θ) is not
available. Therefore, we replace in (15) the unknown gradients ∇A(θ_{i−1}) by the observed
stochastic sub-gradients u_i(θ_{i−1}). This yields a new definition of the t-th search point:

    θ_t = argmin_{θ∈Θ} [ θ^T ( Σ_{i=1}^t γ_i u_i(θ_{i−1}) ) + β_t V(θ) ] = argmax_{θ∈Θ} [ −ζ_t^T θ − β_t V(θ) ],   (16)

where ζ_t = Σ_{i=1}^t γ_i u_i(θ_{i−1}). By a standard result of convex analysis (see e.g. [3]), the
solution to this problem reads as −∇W_{β_t}(ζ_t), and it is now easy to deduce the iterative
scheme (9) of the mirror descent algorithm.
4.2
Comparison with previous work
The versions of the mirror descent method proposed in [12] are somewhat different from our
iterative scheme (9). One of them, closest to ours, is studied in detail in [3]. It is based on
the recursive relation

    θ_i = −∇W_1( −∇V(θ_{i−1}) + γ_i u_i(θ_{i−1}) ),   i = 1, 2, ...,   (17)

where the function V is strongly convex with respect to the norm of the initial space E (which
is not necessarily the space ℓ_1^M) and W_1 is the 1-conjugate function to V.
If Θ = R^M and V(θ) = (1/2)‖θ‖_2², the scheme of (17) coincides with the ordinary gradient
method.
For the unit simplex Θ = Θ_{M,1} and the entropy-type proxy function V from (5) with
β = 1, the coordinates θ_i^(j) of the vector θ_i from (17) are:

    θ_i^(j) = θ_0^(j) exp( −Σ_{m=1}^i γ_m u_{m,j}(θ_{m−1}) ) / Σ_{k=1}^M θ_0^(k) exp( −Σ_{m=1}^i γ_m u_{m,k}(θ_{m−1}) ),   ∀j = 1, ..., M.   (18)
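In code (our sketch), one EG step of (18) on the unit simplex is a multiplicative update followed by renormalization:

```python
import numpy as np

def eg_step(theta, u, gamma):
    """One exponentiated gradient update (18) on the unit simplex Theta_{M,1}."""
    w = theta * np.exp(-gamma * u)   # multiply by exp(-gamma_i * u_{i,j})
    return w / w.sum()               # renormalize back onto the simplex
```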
The algorithm is also known as the exponentiated gradient (EG) method [10]. The differences between the algorithm (17) and ours are the following:
• the initial iterative scheme of the Algorithm is different from that of (17); in particular, it includes the second tuning parameter β_i; moreover, the algorithm (18)
uses the initial value θ_0 in a different manner;
• our algorithm contains the additional averaging step of the updates (10).
The convergence properties of the EG method (18) have been studied in a deterministic setting [6]. Namely, it has been shown that, under some assumptions, the difference
A_t(θ_t) − min_{θ∈Θ_{M,1}} A_t(θ), where A_t is the empirical risk, is bounded by a constant depending on M and t. If this constant is small enough, these results show that the EG method
provides good numerical minimizers of the empirical risk A_t. The averaging step allows
the use of the results provided in [5] to derive generalization error bounds from relative loss
bounds. This technique leads to rates of convergence of the order √((ln M)/t) as well, but
with a suboptimal multiplicative factor in β.
Finally, we point out that the algorithm (17) may be deduced from the ideas mentioned in
Subsection 4.1, which are studied in the literature on proximal methods within the field
of convex optimization (see, e.g., [9, 1] and the references therein). Namely, under rather
general conditions, the variable θ_i from (17) solves the minimization problem

    θ_i = argmin_{θ∈Θ} { θ^T γ_i u_i(θ_{i−1}) + B(θ, θ_{i−1}) },   (19)

where the penalty B(θ, θ_{i−1}) = V(θ) − V(θ_{i−1}) − (θ − θ_{i−1})^T ∇V(θ_{i−1}) represents the
Bregman divergence between θ and θ_{i−1} related to the function V.
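For the entropic proxy (5) with β = 1 on the unit simplex, this Bregman divergence reduces to the Kullback-Leibler divergence, since the linear terms cancel; a short sketch (ours):

```python
import numpy as np

def bregman_entropy(theta, theta_prev):
    """B(theta, theta') for V(theta) = sum_j theta_j ln theta_j on the simplex.
    The terms (theta - theta')^T (1 + ln theta') collapse, leaving KL divergence."""
    return np.sum(theta * np.log(theta / theta_prev))
```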
4.3
General comments
Performance and efficiency. The rate of convergence of order √(ln M)/√t is typical without low noise assumptions (as they are introduced in [17]). Batch procedures based on
minimization of the empirical convex risk functional present a similar rate. From the statistical point of view, there is no remarkable difference between batch and our mirror-descent
procedure. On the other hand, from the computational point of view, our procedure is quite
comparable with the direct stochastic gradient descent. However, the mirror-descent algorithm presents two major advantages as compared both to batch and to direct stochastic gradient: (i) its behavior with respect to the cardinality of the base class is better than for direct
stochastic gradient descent (of the order of √(ln M) in the Theorem, instead of √M or M
for direct stochastic gradient); (ii) mirror descent presents a higher efficiency especially in
high-dimensional problems, as its algorithmic complexity and memory requirements are of
strictly smaller order than for corresponding batch procedures (see [7] for a comparison).
Optimality of the rate of convergence. Using the techniques of [7] and [16] it is not hard
to prove a minimax lower bound on the excess risk E A(θ̂_t) − min_{θ∈Θ_{M,β}} A(θ) having the
order (ln M)^{1/2}/√t for M ≥ t^{1/2+δ} with some δ > 0. This indicates that the upper bound
of the Theorem is rate optimal for such values of M.
Choice of the base class. We point out that the good behaviour of this method crucially relies on the choice of the base class of functions {h_j}_{1≤j≤M}. As far as theory is concerned,
in order to provide a complete statistical analysis, one should establish approximation error
bounds on the quantity inf_{f∈F_{M,β}} A(f) − inf_f A(f), showing that the richness of the base
class is reflected both by the diversity (orthogonality or independence) of the h_j's and by its
cardinality M. For example, one can take the h_j's as the eigenfunctions associated to some
positive definite kernel. We refer to [14], [15] for related results. The choice of β can be
motivated by similar considerations. In fact, to minimize the approximation error it might
be useful to take β depending on the sample size t and tending to infinity at some slow
rate as in [11]. A balance between the stochastic error as given in the Theorem and the
approximation error would then determine the optimal choice of β.
5
Proof of the Theorem
Introduce the notation ∇A(θ) = E u_i(θ) and ξ_i(θ) = u_i(θ) − ∇A(θ). Put v_i = u_i(θ_{i−1}),
which gives ζ_i − ζ_{i−1} = γ_i v_i. By continuous differentiability of W_{β_{i−1}} and by (8) we have:

    W_{β_{i−1}}(ζ_i) = W_{β_{i−1}}(ζ_{i−1}) + γ_i v_i^T ∇W_{β_{i−1}}(ζ_{i−1})
                       + γ_i ∫_0^1 v_i^T [ ∇W_{β_{i−1}}(τ ζ_i + (1−τ) ζ_{i−1}) − ∇W_{β_{i−1}}(ζ_{i−1}) ] dτ
                     ≤ W_{β_{i−1}}(ζ_{i−1}) + γ_i v_i^T ∇W_{β_{i−1}}(ζ_{i−1}) + β γ_i² ‖v_i‖_∞² / (2 β_{i−1}).
Then, using the fact that (β_i)_{i≥1} is a non-decreasing sequence and that, for z fixed, β ↦
W_β(z) is a non-increasing function, we get

    W_{β_i}(ζ_i) ≤ W_{β_{i−1}}(ζ_i) ≤ W_{β_{i−1}}(ζ_{i−1}) − γ_i θ_{i−1}^T v_i + β γ_i² ‖v_i‖_∞² / (2 β_{i−1}).

Summing up over the i's and using the representation ζ_t = Σ_{i=1}^t γ_i v_i, we get, ∀θ ∈ Θ:

    Σ_{i=1}^t γ_i (θ_{i−1} − θ)^T v_i ≤ −W_{β_t}(ζ_t) − ζ_t^T θ + Σ_{i=1}^t β γ_i² ‖v_i‖_∞² / (2 β_{i−1}),
since W_{β_0}(ζ_0) = 0. From the definition of W_β we have, ∀ζ ∈ R^M and ∀θ ∈ Θ, −W_{β_t}(ζ) ≤
ζ^T θ + β_t V(θ). Finally, since v_i = ∇A(θ_{i−1}) + ξ_i(θ_{i−1}), we get

    Σ_{i=1}^t γ_i (θ_{i−1} − θ)^T ∇A(θ_{i−1}) ≤ β_t V(θ) − Σ_{i=1}^t γ_i (θ_{i−1} − θ)^T ξ_i(θ_{i−1}) + Σ_{i=1}^t β γ_i² ‖v_i‖_∞² / (2 β_{i−1}).
As we are to take expectations, we note that, conditioning on θ_{i−1} and using the independence between θ_{i−1} and Z_i, we have E[(θ_{i−1} − θ)^T ξ_i(θ_{i−1})] = 0. Now, convexity
of A and the previous display lead to:

    ∀θ ∈ Θ,  E A(θ̄_t) − A(θ) ≤ ( Σ_{i=1}^t γ_i E[(θ_{i−1} − θ)^T ∇A(θ_{i−1})] ) / ( Σ_{i=1}^t γ_i )
                              = (1/t) Σ_{i=1}^t E[(θ_{i−1} − θ)^T ∇A(θ_{i−1})]
                              ≤ (√(t+1)/t) ( β_0 V* + β L² / β_0 ),

where we have set V* = max_{θ∈Θ} V(θ) and made use of the boundedness assumption
E ‖u_i(θ)‖_∞² ≤ L² and of the particular choice of the stepsize and temperature parameters.
Noticing that V* = β ln M and optimizing this bound over β_0 > 0, we obtain the result.
Acknowledgments
We thank Nicolò Cesa-Bianchi for sharing with us his expertise on relative loss bounds.
References
[1] Beck, A. & Teboulle, M. (2003) Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167–175.
[2] Ben-Tal, A., Margalit, T. & Nemirovski, A. (2001) The Ordered Subsets Mirror Descent optimization method and its use for the Positron Emission Tomography reconstruction problem. SIAM J. on Optimization, 12:79–108.
[3] Ben-Tal, A. & Nemirovski, A.S. (1999) The conjugate barrier mirror descent method for non-smooth convex optimization. MINERVA Optimization Center Report, Technion Institute of Technology. Available at http://iew3.technion.ac.il/Labs/Opt/opt/Pap/CP MD.pdf
[4] Cesa-Bianchi, N. & Gentile, C. (2005) Improved risk tail bounds for on-line algorithms. Submitted.
[5] Cesa-Bianchi, N., Conconi, A. & Gentile, C. (2004) On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050–2057.
[6] Helmbold, D.P., Kivinen, J. & Warmuth, M.K. (1999) Relative loss bounds for single neurons. IEEE Trans. on Neural Networks, 10(6):1291–1304.
[7] Juditsky, A. & Nemirovski, A. (2000) Functional aggregation for nonparametric estimation. Annals of Statistics, 28(3):681–712.
[8] Juditsky, A.B., Nazin, A.V., Tsybakov, A.B. & Vayatis, N. (2005) Recursive Aggregation of Estimators via the Mirror Descent Algorithm with Averaging. Technical Report LPMA, Université Paris 6. Available at http://www.proba.jussieu.fr/pageperso/vayatis/publication.html
[9] Kiwiel, K.C. (1997) Proximal minimization methods with generalized Bregman functions. SIAM J. Control Optim., 35:1142–1168.
[10] Kivinen, J. & Warmuth, M.K. (1997) Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1–64.
[11] Lugosi, G. & Vayatis, N. (2004) On the Bayes-risk consistency of regularized boosting methods (with discussion). Annals of Statistics, 32(1):30–55.
[12] Nemirovski, A.S. & Yudin, D.B. (1983) Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience.
[13] Polyak, B.T. & Juditsky, A.B. (1992) Acceleration of stochastic approximation by averaging. SIAM J. Control Optim., 30:838–855.
[14] Scovel, J.C. & Steinwart, I. (2005) Fast Rates for Support Vector Machines. In Proceedings of the 18th Conference on Learning Theory (COLT 2005), Bertinoro, Italy.
[15] Tarigan, B. & van de Geer, S. (2004) Adaptivity of Support Vector Machines with ℓ1 Penalty. Preprint, University of Leiden.
[16] Tsybakov, A. (2003) Optimal Rates of Aggregation. Proceedings of COLT'03, LNCS, Springer, Vol. 2777:303–313.
[17] Tsybakov, A. (2004) Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32(1):135–166.
[18] Zhang, T. (2004) Statistical behavior and consistency of classification methods based on convex risk minimization (with discussion). Annals of Statistics, 32(1):56–85.
[19] Zhang, T. (2004) Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of ICML'04.
1,959 | 2,780 | Spectral Bounds for Sparse PCA:
Exact and Greedy Algorithms
Baback Moghaddam
MERL
Cambridge MA, USA
[email protected]
Yair Weiss
Hebrew University
Jerusalem, Israel
[email protected]
Shai Avidan
MERL
Cambridge MA, USA
[email protected]
Abstract
Sparse PCA seeks approximate sparse "eigenvectors" whose projections
capture the maximal variance of data. As a cardinality-constrained and
non-convex optimization problem, it is NP-hard and is encountered in
a wide range of applied fields, from bio-informatics to finance. Recent
progress has focused mainly on continuous approximation and convex
relaxation of the hard cardinality constraint. In contrast, we consider an
alternative discrete spectral formulation based on variational eigenvalue
bounds and provide an effective greedy strategy as well as provably
optimal solutions using branch-and-bound search. Moreover, the exact
methodology used reveals a simple renormalization step that improves
approximate solutions obtained by any continuous method. The resulting
performance gain of discrete algorithms is demonstrated on real-world
benchmark data and in extensive Monte Carlo evaluation trials.
1
Introduction
PCA is indispensable as a basic tool for factor analysis and modeling of data. But despite
its power and popularity, one key drawback is its lack of sparseness (i.e., factor loadings
are linear combinations of all the input variables). Yet sparse representations are generally
desirable since they aid human understanding (e.g., with gene expression data), reduce
computational costs and promote better generalization in learning algorithms. In machine
learning, input sparseness is closely related to feature selection and automatic relevance
determination, problems of enduring interest to the learning community.
The earliest attempts at "sparsifying" PCA in the statistics literature consisted of simple
axis rotations and component thresholding [1] with the underlying goal being essentially
that of subset selection, often based on the identification of principal variables [8]. The
first true computational technique, called SCoTLASS by Jolliffe & Uddin [6], provided
a proper optimization framework using Lasso [12] but it proved to be computationally
impractical. Recently, Zou et al. [14] proposed an elegant algorithm (SPCA) using their
?Elastic Net? framework for L1 -penalized regression on regular PCs, solved very efficiently
using least angle regression (LARS). Subsequently, d?Aspremont et al. [3] relaxed the
?hard? cardinality constraint and solved for a convex approximation using semi-definite
programming (SDP). Their ?direct? formulation for sparse PCA (called DSCPA) has
yielded promising results that are comparable to (if not better than) Zou et al.?s Lasso-based
method, as demonstrated on the standard ?Pit Props? benchmark dataset, known in the
statistics community for its lack of sparseness and subsequent difficulty of interpretation.
We pursued an alternative approach using a spectral formulation based on the variational
principle of the Courant-Fischer "Min-Max" theorem for solving maximal eigenvalue
problems in dimensionality-constrained subspaces. By its very nature, the discrete view
leads to a simple post-processing (renormalization) step that improves any approximate
solution (e.g., those given in [6, 14, 3]), and also provides bounds on (sub)optimality.
More importantly, it points the way towards exact and provably optimal solutions using
branch-and-bound search [9]. Our exact computational strategy parallels that of Ko et
al. [7] who solved a different optimization problem (maximizing entropy with bounds
on determinants). In the experiments we demonstrate the power of greedy and exact
algorithms by first solving for the optimal sparse factors of the real-world "Pit Props" data,
a de facto benchmark used by [6, 14, 3], and then present summary findings from a large
comparative study using extensive Monte Carlo evaluation of the leading algorithms.
2
Sparse PCA Formulation
Sparse PCA can be cast as a cardinality-constrained quadratic program (QP): given a
symmetric positive-definite (covariance) matrix A ∈ S_+^n, maximize the quadratic form
x^T A x (variance) with a sparse vector x ∈ R^n having no more than k non-zero elements:

    max_x  x^T A x   subject to   x^T x = 1,   card(x) ≤ k,   (1)

where card(x) denotes the L0 norm. This optimization problem is non-convex, NP-hard
and therefore intractable. Assuming we can solve for the optimal vector x*, subsequent
sparse factors can be obtained using recursive deflation of A, as in standard numerical
routines. The sparseness is controlled by the value(s) of k (in different factors) and can be
viewed as a design parameter or as an unknown quantity itself (known only to the oracle).
Alas, there are currently no guidelines for setting k, especially with multiple factors (e.g.,
orthogonality is often relaxed) and unlike ordinary PCA some decompositions may not be
unique.¹ Indeed, one of the contributions of this paper is in providing a sound theoretical
basis for selecting k, thus clarifying the "art" of crafting sparse PCA factors.
Note that without the cardinality constraint, the quadratic form in Eq.(1) is a Rayleigh-Ritz
quotient obeying the analytic bounds λ_min(A) ≤ x^T A x / x^T x ≤ λ_max(A), with
corresponding unique eigenvector solutions. Therefore, the optimal objective value
(variance) is simply the maximum eigenvalue λ_n(A), attained by the principal eigenvector x* = u_n.
Note: throughout the paper the rank of all (λ_i, u_i) is in increasing order of magnitude,
hence λ_min = λ_1 and λ_max = λ_n. With the (nonlinear) cardinality constraint however,
the optimal objective value is strictly less than λ_max(A) for k < n and the principal
eigenvectors are no longer instrumental in the solution. Nevertheless, we will show that the
eigenvalues of A continue to play a key role in the analysis and design of exact algorithms.
2.1
Optimality Conditions
First, let us consider what conditions must be true if the oracle revealed the optimal solution
to us: a unit-norm vector x* with cardinality k yielding the maximum objective value v*.
This would necessarily imply that x*^T A x* = z^T A_k z, where z ∈ R^k contains the same k
non-zero elements of x* and A_k is the k × k principal submatrix of A obtained by deleting
the rows and columns corresponding to the zero indices of x* (or equivalently, by extracting
the rows and columns of non-zero indices). Like x*, the k-vector z will be unit norm, and
z^T A_k z is then equivalent to a standard unconstrained Rayleigh-Ritz quotient. Since this
subproblem's maximum variance is λ_max(A_k), this must be the optimal objective v*.
We will now summarize this important observation with the following proposition.
1
We should note that the multi-factor version of Eq.(1) is ill-posed without additional constraints
on basis orthogonality, cardinality, variable redundancy, ordinal rank and allocation of variance.
Proposition 1. The optimal value v* of the sparse PCA optimization problem in Eq.(1)
is equal to λ_max(A_k*), where A_k* is the k × k principal submatrix of A with the largest
maximal eigenvalue. In particular, the non-zero elements of the optimal sparse factor x* are
exactly equal to the elements of u_k*, the principal eigenvector of A_k*.
This underscores the inherent combinatorial nature of sparse PCA and the equivalent
class of cardinality-constrained optimization problems. However, despite providing an
exact formulation and revealing necessary conditions for optimality (and in such simple
matrix terms), this proposition does not suggest an efficient method for actually finding the
principal submatrix A_k*, short of an enumerative exhaustive search, which is impractical
for n > 30 due to the exponential growth of possible submatrices. Still, exhaustive search
is a viable method for small n which guarantees optimality for "toy problems" and small
real-world datasets, thus calibrating the quality of approximations (via the optimality gap).
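For small n, Proposition 1 translates directly into a brute-force solver; the sketch below (ours) enumerates all k-subsets and keeps the principal submatrix with the largest maximal eigenvalue:

```python
import numpy as np
from itertools import combinations

def sparse_pca_exact(A, k):
    """Optimal sparse factor by exhaustive enumeration (practical only for small n)."""
    n = A.shape[0]
    best_v, best_x = -np.inf, None
    for idx in combinations(range(n), k):
        vals, vecs = np.linalg.eigh(A[np.ix_(idx, idx)])  # spectrum of A_k
        if vals[-1] > best_v:                             # lambda_max(A_k)
            best_v = vals[-1]
            best_x = np.zeros(n)
            best_x[list(idx)] = vecs[:, -1]               # principal eigenvector u_k
    return best_v, best_x                                 # v* and the sparse factor x*
```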
2.2
Variational Renormalization
Proposition 1 immediately suggests a rather simple but (as it turns out) quite effective
computational "fix" for improving candidate sparse PC factors obtained by any continuous
algorithm (e.g., the various solutions found in [6, 14, 3]).
Proposition 2. Let x̃ be a unit-norm candidate factor with cardinality k as found by any
(approximation) technique. Let z̃ be the non-zero subvector of x̃ and u_k be the principal
(maximum) eigenvector of the submatrix A_k defined by the same non-zero indices of x̃. If
z̃ ≠ u_k(A_k), then x̃ is not the optimal solution. Nevertheless, by replacing x̃'s nonzero
elements with those of u_k we guarantee an increase in the variance, from ṽ to λ_k(A_k).
This variational renormalization suggests (somewhat ironically) that given a continuous
(approximate) solution, it is almost certainly better to discard the loadings and keep only
the sparsity pattern with which to solve the smaller unconstrained subproblem for the
indicated submatrix A_k. This simple procedure (or "fix" as referred to herein) can never
decrease the variance and will surely improve any continuous algorithm's performance.
In particular, the rather expedient but ad-hoc technique of "simple thresholding" (ST) [1],
i.e., setting the n − k smallest absolute value loadings of u_n(A) to zero and then
normalizing to unit norm, is therefore not recommended for sparse PCA. In Section 3,
we illustrate how this "straw-man" algorithm can be enhanced with proper renormalization.
Consequently, past performance benchmarks using this simple technique may need revision,
e.g., previous results on the "Pit Props" dataset (Section 3). Indeed, most of the sparse
PCA factors published in the literature can be readily improved (almost by inspection) with
the proper renormalization, and at the mere cost of a single k-by-k eigen-decomposition.
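The fix itself costs one k-by-k eigendecomposition; a minimal sketch (ours):

```python
import numpy as np

def variational_fix(A, x):
    """Keep only the sparsity pattern of a candidate x and re-solve the subproblem."""
    idx = np.flatnonzero(x)                          # support of the candidate
    vals, vecs = np.linalg.eigh(A[np.ix_(idx, idx)])
    x_fixed = np.zeros_like(x, dtype=float)
    x_fixed[idx] = vecs[:, -1]                       # principal eigenvector of A_k
    return vals[-1], x_fixed                         # variance can only increase
```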
2.3
Eigenvalue Bounds
Recall that the objective value v* in Eq.(1) is bounded by the spectral radius λ_max(A)
(by the Rayleigh-Ritz theorem). Furthermore, the spectrum of A's principal submatrices
was shown to play a key role in defining the optimal solution. Not surprisingly, the two
eigenvalue spectra are related by an inequality known as the Inclusion Principle.
Theorem 1 (Inclusion Principle). Let A be a symmetric n × n matrix with spectrum λ_i(A)
and let A_k be any k × k principal submatrix of A for 1 ≤ k ≤ n with eigenvalues λ_i(A_k).
For each integer i such that 1 ≤ i ≤ k,

    λ_i(A) ≤ λ_i(A_k) ≤ λ_{i+n−k}(A).   (2)

Proof. The proof, which we omit, is a rather straightforward consequence of imposing a
sparsity pattern of cardinality k as an additional orthogonality constraint in the variational
inequality of the Courant-Fischer "Min-Max" theorem (see [13] for example).
In other words, the eigenvalues of a symmetric matrix form upper and lower bounds for the
eigenvalues of all its principal submatrices. A special case of Eq.(2) with k = n − 1 leads
to the well-known eigenvalue interlacing property of symmetric matrices:

    λ_1(A_n) ≤ λ_1(A_{n−1}) ≤ λ_2(A_n) ≤ ... ≤ λ_{n−1}(A_n) ≤ λ_{n−1}(A_{n−1}) ≤ λ_n(A_n).   (3)
Hence, the spectra of A_n and A_{n−1} interleave or interlace each other, with the eigenvalues
of the larger matrix "bracketing" those of the smaller one. Note that for positive-definite
symmetric matrices (covariances), augmenting A_m to A_{m+1} (adding a new variable) will
always expand the spectral range: reducing λ_min and increasing λ_max. Thus for eigenvalue
maximization, the inequality constraint card(x) ≤ k in Eq.(1) is a tight equality at the
optimum. Therefore, the maximum variance is achieved at the preset upper limit k of
cardinality. Moreover, the function v*(k), the optimal variance for a given cardinality, is
monotone increasing with range [σ_max²(A), λ_max(A)], where σ_max² is the largest diagonal
element (variance) in A. Hence, a concise and informative way to quantify the performance
of an algorithm is to plot its variance curve ṽ(k) and compare it with the optimal v*(k).
Since we seek to maximize variance, the relevant inclusion bound is obtained by setting
i = k in Eq.(2), which yields lower and upper bounds for λ_k(A_k) = λ_max(A_k):

    λ_k(A) ≤ λ_max(A_k) ≤ λ_max(A).   (4)

This shows that the k-th smallest eigenvalue of A is a lower bound for the maximum
variance possible with cardinality k. The utility of this lower bound is in doing away with
the "guesswork" (and the oracle) in setting k. Interestingly, we now see that the spectrum of
A, which has traditionally guided the selection of eigenvectors for dimensionality reduction
(e.g., in classical PCA), can also be consulted in sparse PCA to help pick the cardinality
required to capture the desired (minimum) variance. The lower bound λ_k(A) is also useful
for speeding up branch-and-bound search (see next Section). Note that if λ_k(A) is close to
λ_max(A), then practically any principal submatrix A_k can yield a near-optimal solution.
The right-hand inequality in Eq.(4) is a fixed (loose) upper bound λ_max(A) for all k.
But in branch-and-bound search, any intermediate subproblem A_m, with k ≤ m ≤ n,
yields a new and tighter bound λ_max(A_m) for the objective v*(k). Therefore, all bound
computations are efficient and relatively inexpensive (e.g., using the power method).
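In practice (our sketch), the bound (4) lets one scan the sorted spectrum of A once and read off the smallest cardinality whose guaranteed variance λ_k(A) reaches a desired fraction of λ_max(A):

```python
import numpy as np

def min_cardinality(A, fraction=0.9):
    """Smallest k with lambda_k(A) >= fraction * lambda_max(A), cf. the bound (4)."""
    lam = np.linalg.eigvalsh(A)             # ascending, so lam[k-1] = lambda_k(A)
    k = int(np.searchsorted(lam, fraction * lam[-1])) + 1
    return k
```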
The inclusion principle also leads to some interesting constraints on nested submatrices.
For example, among all m possible (m−1)-by-(m−1) principal submatrices of A_m,
obtained by deleting the j-th row and column, there is at least one submatrix A_{m−1} = A_{\j}
whose maximal eigenvalue is a major fraction of its parent (e.g., see p. 189 in [4]):

    ∃ j : λ_{m−1}(A_{\j}) ≥ ((m−1)/m) λ_m(A_m).   (5)

The implication of this inequality for search algorithms is that it is simply not possible for
the spectral radius of every submatrix A_{\j} to be arbitrarily small, especially for large m.
Hence, with large matrices (or large cardinality) nearly all the variance λ_n(A) is captured.
2.4
Combinatorial Optimization
Given Propositions 1 and 2, the inclusion principle, the interlacing property and especially
the monotonic nature of the variance curves ṽ(k), a general class of (binary) integer
programming (IP) optimization techniques [9] seems ideally suited for sparse PCA. Indeed,
a greedy technique like backward elimination is already suggested by the bound in Eq.(5):
start with the full index set I = {1, 2, ..., n} and sequentially delete the variable j
which yields the maximum λ_max(A_{\j}) until only k elements remain. However, for
small cardinalities k << n, the computational cost of backward search can grow to near
maximum complexity, O(n⁴). Hence its counterpart, forward selection, is preferred:
start with the null index set I = {} and sequentially add the variable j which yields the
maximum λ_max(A_{+j}) until k elements are selected. Forward greedy search has worst-case
complexity < O(n³). The best overall strategy for this problem was empirically
found to be a bi-directional greedy search: run a forward pass (from 1 to n) plus a second
(independent) backward pass (from n to 1) and pick the better solution at each k. This
proved to be remarkably effective under extensive Monte Carlo evaluation and with
real-world datasets. We refer to this discrete algorithm as greedy sparse PCA or GSPCA.
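A forward pass of this greedy scheme is straightforward; the backward pass mirrors it by deleting variables (our sketch):

```python
import numpy as np

def greedy_forward(A, k):
    """Forward greedy pass of GSPCA-style selection: grow the index set by the
    variable that maximizes lambda_max of the enlarged principal submatrix."""
    n, idx, v = A.shape[0], [], 0.0
    for _ in range(k):
        cand = [(np.linalg.eigvalsh(A[np.ix_(idx + [j], idx + [j])])[-1], j)
                for j in range(n) if j not in idx]
        v, j_best = max(cand)               # best lambda_max(A_{+j})
        idx.append(j_best)
    return v, sorted(idx)                   # captured variance and support
```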
Despite the expediency of near-optimal greedy search, it is nevertheless worthwhile to
invest in optimal solution strategies, especially if the sparse PCA problem is in the
application domain of finance or engineering, where even a small optimality gap can
accrue substantial losses over time. As with Ko et al. [7], our branch-and-bound relies
on computationally efficient bounds, in our case the upper bound in Eq.(4), used on all
active subproblems in a (FIFO) queue for depth-first search. The lower bound in Eq.(4)
can be used to sort the queue for a more efficient best-first search [9]. This exact algorithm
(referred to as ESPCA) is guaranteed to terminate with the optimal solution. Naturally,
the search time depends on the quality (variance) of the initial candidates. The solutions found
by dual-pass greedy search (GSPCA) were found to be ideal for initializing ESPCA, as
their quality was typically quite high. Note however, that even with good initializations,
branch-and-bound search can take a long time (e.g., 1.5 hours for n = 40, k = 20). In
practice, early termination with set thresholds based on eigenvalue bounds can be used.
In general, a cost-effective strategy that we can recommend is to first run GSPCA (or at
least the forward pass) and then either settle for its (near-optimal) variance or else use it
to initialize ESPCA for finding the optimal solution. A full GSPCA run has the added
benefit of giving near-optimal solutions for all cardinalities at once, with run-times that are
typically O(10²) faster than a single approximation with a continuous method.
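A compact depth-first variant of this strategy (our simplified sketch; in practice one would seed best with the GSPCA variance) prunes any candidate set S whose upper bound λ_max(A_S) from Eq.(4) cannot beat the incumbent:

```python
import numpy as np

def espca_bb(A, k, best=0.0):
    """Exact sparse PCA variance via depth-first branch-and-bound over supports."""
    def lam_max(S):
        return np.linalg.eigvalsh(A[np.ix_(S, S)])[-1]

    def recurse(S, start, best):
        v = lam_max(S)
        if v <= best:                        # prune: bound cannot beat incumbent
            return best
        if len(S) == k:                      # leaf: S is a candidate support
            return v
        for i in range(start, len(S)):       # branch by deleting S[i];
            best = recurse(S[:i] + S[i+1:], i, best)  # i >= start avoids duplicates
        return best

    return recurse(list(range(A.shape[0])), 0, best)
```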
3
Experiments
We evaluated the performance of GSPCA (and validated ESPCA) on various synthetic
covariance matrices with 10 ≤ n ≤ 40 as well as real-world datasets from the UCI ML
repository, with excellent results. We present a few typical examples in order to illustrate the
advantages and power of discrete algorithms. In particular, we compared our performance
against 3 continuous techniques: simple thresholding (ST) [1], SPCA using an "Elastic
Net" L1-regression [14] and DSPCA using semidefinite programming [3].
We first revisited the "Pit Props" dataset [5], which has become a standard benchmark and
a classic example of the difficulty of interpreting fully loaded factors with standard PCA.
The first 6 ordinary PCs capture 87% of the total variance, so following the methodology in
[3], we compared the explanatory power of our exact method (ESPCA) using 6 sparse PCs.
Table 1 shows the first 3 PCs and their loadings. SPCA captures 75.8% of the variance with
a cardinality pattern of 744111 (the k's for the 6 PCs), thus totaling 18 non-zero loadings
[14], whereas DSPCA captures 77.3% with a sparser cardinality pattern 623111 totaling 14
non-zero loadings [3]. We aimed for an even sparser 522111 pattern (with only 12 non-zero
loadings) yet captured nearly the same variance: 75.9%, i.e., more than SPCA with 18
loadings and slightly less than DSPCA with 14 loadings.
Using the evaluation protocol in [3], we compared the cumulative variance and cumulative
cardinality with the published results of SPCA and DSPCA in Figure 1. Our goal was to
match the explained variance but do so with a sparser representation. The ESPCA loadings
in Table 1 are optimal under the definition given in Section 2. The run-time of ESPCA,
including initialization with a bi-directional pass of GSPCA, was negligible for this dataset
(n = 13). Computing each factor took less than 50 msec in Matlab 7.0 on a 3GHz P4.
             x1     x2     x3    x4    x5    x6     x7     x8     x9     x10    x11  x12   x13
SPCA:  PC1   -.477  -.476  0     0     .177  0      -.250  -.344  -.416  -.400  0    0     0
       PC2   0      0      .785  .620  0     0      0      -.021  0      0      0    .013  0
       PC3   0      0      0     0     .640  .589   .492   0      0      0      0    0     -.015
DSPCA: PC1   -.560  -.583  0     0     0     0      -.263  -.099  -.371  -.362  0    0     0
       PC2   0      0      .707  .707  0     0      0      0      0      0      0    0     0
       PC3   0      0      0     0     0     -.793  -.610  0      0      0      0    0     .012
ESPCA: PC1   -.480  -.491  0     0     0     0      -.405  0      -.423  -.431  0    0     0
       PC2   0      0      .707  .707  0     0      0      0      0      0      0    0     0
       PC3   0      0      0     0     0     -.814  -.581  0      0      0      0    0     0
Table 1: Loadings for first 3 sparse PCs of the Pit Props data. See Figure 1(a) for plots of the
corresponding cumulative variances. Original SPCA and DSPCA loadings taken from [14, 3].
Figure 1: Pit Props: (a) cumulative variance and (b) cumulative cardinality for the first 6 sparse PCs.
Sparsity patterns (cardinality k_i for PC_i, with i = 1, 2, ..., 6) are 744111 for SPCA (magenta),
623111 for DSPCA (green) and an optimal 522111 for ESPCA (red). The factor loadings for the
first 3 sparse PCs are shown in Table 1. Original SPCA and DSPCA results taken from [14, 3].
To specifically demonstrate the benefits of the variational renormalization of Section 2.2,
consider SPCA's first sparse factor in Table 1 (the 1st row of the SPCA block) found by iterative
(L1-penalized) optimization and unit-norm scaling. It captures 28% of the total data
variance, but after the variational renormalization the variance increases to 29%. Similarly,
the first sparse factor of DSPCA in Table 1 (1st row of the DSPCA block) captures 26.6% of
the total variance, whereas after variational renormalization it captures 29%, a gain of
2.4% for the mere additional cost of a 7-by-7 eigen-decomposition. Given that variational
renormalization results in the maximum variance possible for the indicated sparsity pattern,
omitting such a simple post-processing step is counter-productive, since otherwise the
approximations would be, in a sense, doubly sub-optimal: both globally and "locally" in
the subspace (subset) of the sparsity pattern found.
We now give a representative summary of our extensive Monte Carlo (MC) evaluation
of GSPCA and the 3 continuous algorithms. To show the most typical or average-case
performance, we present results with random covariance matrices from synthetic stochastic
Brownian processes of various degrees of smoothness, ranging from sub-Gaussian to
super-Gaussian. Every MC run consisted of 50,000 covariance matrices and the (normalized)
variance curves ṽ(k). For each matrix, ESPCA was used to find the optimal solution as
"ground truth" for subsequent calibration, analysis and performance evaluation.
For SPCA we used the LARS-based "Elastic Net" SPCA Matlab toolbox of Sjöstrand [10],
which is equivalent to Zou et al.'s SPCA source code, which is also freely available in R.
For DSPCA we used the authors' own Matlab source code [2], which uses the SDP toolbox
SeDuMi1.0x [11]. The main DSPCA routine PrimalDec(A, k) was called with k−1 instead
of k, for all k > 2, as per the recommended calibration (see documentation in [3, 2]).
In our MC evaluations, all continuous methods (ST, SPCA and DSPCA) had variational
renormalization post-processing (applied to their "declared" solution). Note that
comparing GSPCA with the raw output of these algorithms would be rather pointless, since
without the "fix" their variance curves are markedly diminished, as in Figure 2(a).

Figure 2: (a) Typical variance curve v(k) for a continuous algorithm without post-processing
(original: dashed green) and with variational renormalization (+ Fix: solid green); the optimal
variance (black) is given by ESPCA. At k = 4 the optimality ratio increases from 0.65 to 0.86
(a 21% gain). (b) Monte Carlo study: log-likelihood of the optimality ratio at maximum complexity
(k = 8, n = 16) for ST (blue), DSPCA (green), SPCA (magenta) and GSPCA (red). Continuous
methods were "fixed" in (b).

Figure 3: Monte Carlo summary statistics: (a) means of the distributions of the optimality ratio (in
Figure 2(b)) for all k and (b) estimated probability of finding the optimal solution for each cardinality.
Figure 2(b) shows the histogram of the optimality ratio, i.e., the ratio of the captured to
the optimal variance, shown here at "half-sparsity" (k = 8, n = 16) from a typical MC
run of 50,000 different covariance matrices. In order to view the (one-sided) tails of
the distributions we have plotted the log of the histogram values. Figure 3(a) shows the
corresponding mean values of the optimality ratio for all k. Among continuous algorithms,
the SDP-based DSPCA was generally more effective (almost comparable to GSPCA). For
the smaller matrices (n < 10), LARS-based SPCA matched DSPCA for all k. In terms of
complexity and speed however, SPCA was about 40 times faster than DSPCA. But GSPCA
was 30 times faster than SPCA. Finally, we note that even simple thresholding (ST),
once enhanced with the variational renormalization, performs quite adequately despite its
simplicity, as it captures at least 92% of the optimal variance, as seen in Figure 3(a).
Figure 3(b) shows an alternative but more revealing performance summary: the fraction
of the (50,000) trials in which the optimal solution was actually found (essentially,
the likelihood of "success"). This all-or-nothing performance measure elicits important
differences between the algorithms. In practical terms, only GSPCA is capable of finding
the optimal factor more than 90% of the time (vs. 70% for DSPCA). Naturally, without the
variational "fix" (not shown), continuous algorithms rarely ever found the optimal solution.
4
Discussion
The contributions of this paper can be summarized as: (1) an exact variational formulation
of sparse PCA, (2) requisite eigenvalue bounds, (3) a principled choice of k, (4) a simple
renormalization "fix" for any continuous method, (5) fast and effective greedy search
(GSPCA) and (6) a less efficient but optimal method (ESPCA). Surprisingly, simple
thresholding of the principal eigenvector (ST) was shown to be rather effective, especially
given the perceived "straw-man" it was considered to be. Naturally, its performance will
vary with the effective rank (or "eigen-gap") of the covariance matrix. In fact, it is not
hard to show that if A is exactly rank-1, then ST is indeed an optimal strategy for all k.
However, beyond such special cases, continuous methods can not ultimately be competitive
with discrete algorithms without the variational renormalization "fix" in Section 2.2.
We should note that the somewhat remarkable effectiveness of GSPCA is not entirely
unexpected and is supported by empirical observations in the combinatorial optimization
literature: greedy search with (sub)modular cost functions having the monotonicity
property (e.g., the variance curves ṽ(k)) is known to produce good results [9]. In terms of
quality of solutions, GSPCA consistently out-performed continuous algorithms, with run-times
that were typically O(10²) faster than LARS-based SPCA and roughly O(10³) faster
than SDP-based DSPCA (Matlab CPU times averaged over all k).
Nevertheless, we view discrete algorithms as complementary tools, especially since the
leading continuous algorithms have distinct advantages. For example, with very high-dimensional
datasets (e.g., n = 10,000), Zou et al.'s LARS-based method is currently the
only viable option, since it does not rely on computing or storing a huge covariance matrix.
Although d'Aspremont et al. mention the possibility of solving "larger" systems much
faster (using Nesterov's 1st-order method [3]), this would require a full matrix in memory
(same as discrete algorithms). Still, their SDP formulation has an elegant robustness
interpretation and can also be applied to non-square matrices (i.e., for a sparse SVD).
Acknowledgments
The authors would like to thank Karl Sjöstrand (DTU) for his customized code and helpful advice in
using the LARS-SPCA toolbox [10] and Gert Lanckriet (Berkeley) for providing the Pit Props data.
References
[1] J. Cadima and I. Jolliffe. Loadings and correlations in the interpretation of principal components. Applied Statistics, 22:203–214, 1995.
[2] A. d'Aspremont. DSPCA Toolbox. http://www.princeton.edu/~aspremon/DSPCA.htm.
[3] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A Direct Formulation for Sparse PCA using Semidefinite Programming. In Advances in Neural Information Processing Systems (NIPS). Vancouver, BC, December 2004.
[4] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge Press, Cambridge, England, 1985.
[5] J. Jeffers. Two case studies in the application of principal components. Applied Statistics, 16:225–236, 1967.
[6] I. T. Jolliffe and M. Uddin. A Modified Principal Component Technique based on the Lasso. Journal of Computational and Graphical Statistics, 12:531–547, 2003.
[7] C. Ko, J. Lee, and M. Queyranne. An Exact Algorithm for Maximum Entropy Sampling. Operations Research, 43(4):684–691, July-August 1995.
[8] G. McCabe. Principal variables. Technometrics, 26:137–144, 1984.
[9] G. L. Nemhauser and L. A. Wolsey. Integer and Combinatorial Optimization. John Wiley, New York, 1988.
[10] K. Sjöstrand. Matlab implementation of LASSO, LARS, the Elastic Net and SPCA. Informatics and Mathematical Modelling, Technical University of Denmark (DTU), 2005.
[11] J. F. Sturm. SeDuMi1.0x, a MATLAB Toolbox for Optimization over Symmetric Cones. Optimization Methods and Software, 11:625–653, 1999.
[12] R. Tibshirani. Regression shrinkage and selection via Lasso. Journal of the Royal Statistical Society B, 58:267–288, 1995.
[13] J. H. Wilkinson. The Algebraic Eigenvalue Problem. Clarendon Press, Oxford, England, 1965.
[14] H. Zou, T. Hastie, and R. Tibshirani. Sparse Principal Component Analysis. Technical Report, Statistics Department, Stanford University, 2004.
1,960 | 2,781 | Analyzing Coupled Brain Sources:
Distinguishing True from Spurious Interaction
Guido Nolte¹, Andreas Ziehe³, Frank Meinecke¹ and Klaus-Robert Müller¹,²
¹ Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany
² Dept. of CS, University of Potsdam, August-Bebel-Strasse 89, 14482 Potsdam, Germany
³ TU Berlin, Inst. for Software Engineering, Franklinstr. 28/29, 10587 Berlin, Germany
{nolte,ziehe,meinecke,klaus}@first.fhg.de
Abstract
When trying to understand the brain, it is of fundamental importance to
analyse (e.g. from EEG/MEG measurements) what parts of the cortex
interact with each other in order to infer more accurate models of brain
activity. Common techniques like Blind Source Separation (BSS) can estimate brain sources and single out artifacts by using the underlying assumption of source signal independence. However, physiologically interesting brain sources typically interact, so BSS will, by construction,
fail to characterize them properly. Noting that there are truly interacting
sources and signals that only seemingly interact due to effects of volume
conduction, this work aims to contribute by distinguishing these effects.
For this a new BSS technique is proposed that uses anti-symmetrized
cross-correlation matrices and subsequent diagonalization. The resulting
decomposition consists of the truly interacting brain sources and suppresses any spurious interaction stemming from volume conduction. Our
new concept of interacting source analysis (ISA) is successfully demonstrated on MEG data.
1 Introduction
Interaction between brain sources, phase synchrony or coherent states of brain activity
are believed to be fundamental for neural information processing (e.g. [2, 6, 5]). So it is
an important topic to devise new methods that can more reliably characterize interacting
sources in the brain. The macroscopic nature and the high temporal resolution of electroencephalography (EEG) and magnetoencephalography (MEG) in the millisecond range
makes these measurement technologies ideal candidates to study brain interactions. However, interpreting data from EEG/MEG channels in terms of connections between brain
sources is largely hampered by artifacts of volume conduction, i.e. the fact that activities of
single sources are observable as superposition in all channels (with varying amplitude). So
ideally one would like to discard all signals that only seemingly interact due to volume conduction, and retain only truly linked brain source activity.
So far, neither existing source separation methods nor typical phase synchronization analysis (e.g. [1, 5] and references therein) can adequately handle signals when the sources are both superimposed and interacting, i.e. non-independent (cf. discussions in [3, 4]). It
is here where we contribute in this paper by proposing a new algorithm to distinguish true
from spurious interaction. A prerequisite to achieve this goal was recently established by
[4]: as a consequence of instantaneous and linear volume conduction, the cross-spectra
of independent sources are real-valued, regardless of the specifics of the volume conductor, number of sources or source configuration. Hence, a non-vanishing imaginary part of
the cross-spectra must necessarily reflect a true interaction. Drawbacks of Nolte's method
are: (a) cross-spectra for all frequencies in multi-channel systems contain a huge amount
of information and it can be tedious to find the interesting structures, (b) it is very much
possible that the interacting brain consists of several subsystems which are independent
of each other but are not separated by that method, and (c) the method is well suited for
rhythmic interactions while wide-band interactions are not well represented.
A recent different approach by [3] uses BSS as preprocessing step before phase synchronization is measured. The drawback of this method is the assumption that there are not
more sources than sensors, which is often heavily violated because, e.g., channel noise
trivially consists of as many sources as channels, and, furthermore, brain noise can be very
well modelled by assuming thousands of randomly distributed and independent dipoles.
To avoid the drawbacks of either method we will formulate an algorithm called interacting
source analysis (ISA) which is technically based on BSS using second order statistics but
is only sensitive to interacting sources and, thus, can be applied to systems with arbitrary
noise structure. In the next section, after giving a short introduction to BSS as used for this
paper, we will derive some fundamental properties of our new method. In section 3 we will
show in simulated data and real MEG examples that the ISA procedure finds the interacting
components and separates interacting subsystems which are independent of each other.
2 Theory
The fundamental assumption of ICA is that a data matrix X, without loss of generality
assumed to be zero mean, originates from a superposition of independent sources S such
that
X = AS
(1)
where A is called the mixing matrix which is assumed to be invertible. The task is to find
A and hence S (apart from meaningless ordering and scale transformations of the columns
of A and the rows of S) by merely exploiting statistical independence of the sources. Since
independence implies that the sources are uncorrelated we may choose W , the estimated
inverse mixing matrix, such that the covariance matrix of
$\hat{S} \equiv W X$
(2)
is equal to the identity matrix. This, however, does not uniquely determine W because for
any such W also U W , where U is an arbitrary orthogonal matrix, leads to a unit covariance matrix of $\hat{S}$. Uniqueness can be restored if we require that W not only diagonalizes the covariance matrix but also the cross-correlation matrices for various delays $\tau$, i.e. we require that
$W C^X(\tau) W^\top = \mathrm{diag}$
(3)
with
$C^X(\tau) \equiv \langle \mathbf{x}(t)\,\mathbf{x}^\top(t+\tau) \rangle$
(4)
where $\mathbf{x}(t)$ is the $t$-th column of X and $\langle\cdot\rangle$ means the expectation value, which is estimated by
the average over t. Although at this stage all expressions are real-valued we introduce a
complex formulation for later use.
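To make this second-order BSS step concrete, the following minimal numpy sketch (our own illustration, not code from the paper; the AR(1) toy sources and the oracle demixer are our choices) estimates time-lagged correlation matrices and checks how diagonal they become under a demixing matrix W:

```python
import numpy as np

def lagged_corr(X, tau):
    """Estimate C^X(tau) = <x(t) x(t+tau)^T> by averaging over t."""
    N, T = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    return X[:, :T - tau] @ X[:, tau:].T / (T - tau)

def off_diagonality(M):
    """Relative energy of the off-diagonal entries of M."""
    off = M - np.diag(np.diag(M))
    return np.linalg.norm(off) / np.linalg.norm(M)

# Toy example: mix 3 independent AR(1) sources into 3 channels.
rng = np.random.default_rng(0)
T = 10000
S = np.zeros((3, T))
for i, a in enumerate([0.9, 0.5, -0.7]):          # distinct temporal structure
    for t in range(1, T):
        S[i, t] = a * S[i, t - 1] + rng.standard_normal()
A = rng.standard_normal((3, 3))                   # mixing matrix
X = A @ S

W = np.linalg.inv(A)                              # oracle demixer, for checking
for tau in (1, 2, 5):
    C = lagged_corr(X, tau)
    print(tau, off_diagonality(W @ C @ W.T))      # should be close to zero
```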
Note that, since under the ICA assumption the cross-correlation matrices $C^S(\tau)$ of the source signals are diagonal,

$C^S_{ij}(\tau) = \langle s_i(t)\, s_i(t+\tau) \rangle\, \delta_{ij} = C^S_{ji}(\tau),$   (5)

the cross-correlation matrices of the mixtures are symmetric:

$C^X(\tau) = A C^S(\tau) A^\top = A C^{S\top}(\tau) A^\top = C^{X\top}(\tau)$   (6)
Hence, the antisymmetric part of $C^X(\tau)$ can only arise due to meaningless fluctuations and can be ignored. In fact, the above TDSEP algorithm uses symmetrized versions of $C^X(\tau)$ [8].
Now, the key and new point of our method is that we will turn the above argument upside down. Since non-interacting sources do not contribute (systematically) to the antisymmetrized correlation matrices
$D(\tau) \equiv C^X(\tau) - C^{X\top}(\tau)$
(7)
any (significant) non-vanishing elements in $D(\tau)$ must arise from interacting sources, and hence the analysis of $D(\tau)$ is ideally suited to study the interacting brain. In doing so
we exploit that neuronal interactions necessarily take some time which is well above the
typical time resolution of EEG/MEG measurements.
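As an illustration of this point (our own sketch, not from the paper), the anti-symmetrized matrices of Eq. (7) can be estimated directly from multi-channel data; for independent, instantaneously mixed sources their entries fluctuate around zero, while a delayed coupling between sources leaves a clear imprint:

```python
import numpy as np

def antisym_corr(X, tau):
    """D(tau) = C^X(tau) - C^X(tau)^T, estimated from data matrix X (channels x time)."""
    N, T = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    C = X[:, :T - tau] @ X[:, tau:].T / (T - tau)
    return C - C.T

# Two sources interacting with a 5-sample delay, mixed into 4 channels.
rng = np.random.default_rng(1)
T, d = 20000, 5
s1 = rng.standard_normal(T)
s2 = np.roll(s1, d) + 0.5 * rng.standard_normal(T)   # s2 lags s1: a true interaction
S = np.vstack([s1, s2])
A = rng.standard_normal((4, 2))
X = A @ S

print(np.abs(antisym_corr(X, d)).max())    # clearly non-zero: true interaction
print(np.abs(antisym_corr(X, 50)).max())   # near zero at unrelated delays
```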
It is now our goal to identify one or many interacting systems from a suitable spatial transformation which corresponds to a demixing of the systems rather than individual sources.
Although we concentrate on those components which explicitly violate the independence
assumption we will use the technique of simultaneous diagonalization to achieve this goal.
We first note that a diagonalization of $D(\tau)$ using a real-valued W is meaningless, since with $D(\tau)$ also $W D(\tau) W^\top$ is anti-symmetric and always has vanishing diagonal elements. Hence $D(\tau)$ can only be diagonalized with a complex-valued W, with subsequent interpretation of it in terms of a real-valued transformation.
We will here discuss the case where all interacting systems consist of pairs of neuronal
sources. Properties of systems with more than two interacting systems will be discussed
below. Furthermore, for simplicity we assume an even number of channels. Then a real-valued spatial transformation $W_1$ exists such that the set of $D(\tau)$ becomes decomposed into $K = N/2$ blocks of size $2 \times 2$:

$W_1 D(\tau) W_1^\top = \begin{pmatrix} \alpha_1(\tau)\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} & & 0 \\ & \ddots & \\ 0 & & \alpha_K(\tau)\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \end{pmatrix}$   (8)
Each block can be diagonalized, e.g., with

$W_2 = \mathrm{id}_{K \times K} \otimes \hat{W}_2$   (9)

and with

$\hat{W}_2 = \begin{pmatrix} 1 & -i \\ 1 & i \end{pmatrix}$   (10)

we get

$W_2 W_1 D(\tau) W_1^\top W_2^\dagger = \mathrm{diag}.$   (11)
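As a quick sanity check of this transform (a worked step of our own, not part of the original text), one can verify directly that $\hat{W}_2$ diagonalizes the elementary antisymmetric block:

$\hat{W}_2 \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \hat{W}_2^\dagger = \begin{pmatrix} 1 & -i \\ 1 & i \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix} = \begin{pmatrix} 2i & 0 \\ 0 & -2i \end{pmatrix},$

so each block $\alpha_k(\tau)\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ is mapped to $\mathrm{diag}(2i\alpha_k(\tau),\, -2i\alpha_k(\tau))$; this is why the diagonal elements later come in pairs of the form $\pm i\alpha$ (a factor-of-two normalization of $\hat{W}_2$ is immaterial).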
From a simultaneous diagonalization of $D(\tau)$ we obtain an estimate $\hat{W}$ of the true demixing matrix $W = W_2 W_1$. We are interested in the columns of $W_1^{-1}$, which correspond to the spatial patterns of the interacting sources. Let us denote the $N \times 2$ submatrix of a matrix $B$ consisting of the $(2k{-}1)$-th and the $2k$-th column as $(B)_k$. Then we can write

$(W_1^{-1})_k \equiv (W^{-1})_k \hat{W}_2$   (12)
and hence the desired spatial patterns of the $k$-th system are a complex linear superposition of the $(2k{-}1)$-th and the $2k$-th column of $W$. The subspace spanned in channel-space by the two interacting sources, denoted as $\mathrm{span}((A)_k)$, can now be found by separating the real and imaginary parts of $W^{-1}$:

$\mathrm{span}((A)_k) = \mathrm{span}\big(\Re((W^{-1})_k),\ \Im((W^{-1})_k)\big)$   (13)
According to (13) we can calculate from W just the 2D-subspaces spanned by the interacting systems but not the patterns of the sources themselves. The latter would indeed be
impossible because all we analyze are anti-symmetric matrices which are, for each system,
constructed as anti-symmetric outer products of the two respective field patterns. These
anti-symmetric matrices are, apart from an irrelevant global scale, invariant with respect to
a linear and real-valued mixing of the sources within each system.
The general procedure can now be outlined as follows.
1. From the data construct anti-symmetric cross-correlation matrices as defined in Eq. (7) for a reasonable set of delays $\tau$.
2. Find a complex matrix W such that $W D(\tau) W^\dagger$ is approximately diagonal for all $\tau$.
3. If the system consists of subsystems of paired interactions (and indeed, according to our own experience, this is very much the case in practice) the diagonal elements in $W D(\tau) W^\dagger$ come in pairs of the form $\pm i\alpha$. Each pair constitutes one interacting system. The corresponding two columns in $W^{-1}$, with separated real and imaginary parts, form an $N \times 4$ matrix V of rank 2. The span of V coincides with the space spanned by the respective system. In practice, V will have two singular values which are just very small rather than exactly zero. The corresponding singular vectors should then be discarded. Instead of analyzing V in the above way it is also possible to simply take the real and imaginary part of either one of the two columns.
4. Similar to the spatial analysis, it is not possible to separate the time-courses of two interacting sources within one subsystem. In general, two estimated time-courses, say $\hat{s}_1(t)$ and $\hat{s}_2(t)$, are an unknown linear combination of the true source activations $s_1(t)$ and $s_2(t)$. To understand the type of interaction it is still recommended to look at the power and autocorrelation functions. Invariant with respect to linear mixing within one subsystem is the anti-symmetrized cross-correlation between $\hat{s}_1(t)$ and $\hat{s}_2(t)$ and, equivalently, the imaginary part of the cross-spectral density. For the $k$-th system, these quantities are given by the $k$-th diagonal $\alpha_k(\tau)$ and their respective Fourier transforms. (A compact code sketch of the whole procedure follows after this list.)
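The following numpy fragment condenses the four steps above into code; it is our own schematic, and joint_diagonalize is a hypothetical placeholder for an approximate complex joint-diagonalization routine such as the natural-gradient method of [7], which is not implemented here:

```python
import numpy as np

def antisym_corr_set(X, taus):
    """Step 1: anti-symmetrized cross-correlations D(tau) for a set of delays."""
    N, T = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    Ds = []
    for tau in taus:
        C = Xc[:, :T - tau] @ Xc[:, tau:].T / (T - tau)
        Ds.append(C - C.T)
    return Ds

def isa(X, taus, joint_diagonalize):
    """Steps 2-4 of interacting source analysis (schematic).

    joint_diagonalize: callable returning a complex W such that
    W D(tau) W^H is approximately diagonal for all taus (hypothetical).
    """
    Ds = antisym_corr_set(X, taus)
    W = joint_diagonalize(Ds)                      # step 2 (placeholder routine)
    Winv = np.linalg.inv(W)
    systems = []
    K = W.shape[0] // 2
    for k in range(K):                             # step 3: one pair per system
        col = Winv[:, 2 * k]                       # either column of the pair works
        pattern = np.stack([col.real, col.imag])   # spans the system's 2D subspace
        alpha = np.array([np.imag(W @ D @ W.conj().T)[2 * k, 2 * k] for D in Ds])
        systems.append((pattern, alpha))           # step 4: alpha_k(tau) diagonal
    return systems
```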
While (approximate) simultaneous diagonalization of $D(\tau)$ using complex demixing matrices is always possible with pairwise interactions, we can expect only block-diagonal structure if a larger number of sources are interacting within one or more subsystems. We will show below for simulated data that the algorithm still finds these blocks although the actual goal, i.e. diagonal $W D(\tau) W^\dagger$, is not reachable.
3 Results
3.1 Simulated data
Matrices were approximately simultaneously diagonalized with the DOMUNG-algorithm
[7], which was generalized to the complex domain. Here, an initial guess for the demixing
matrix W is successively optimized using a natural gradient approach combined with line
search according to the requirement that the off-diagonals are minimal under the constraint
det(W ) = 1. Special care has to be taken in the choice of the initial guess. Due to the
complex-conjugation symmetry of our problem (i.e., $W^*$ diagonalizes as well as W) the
initial guess may not be set to a real-valued matrix because then the component of the
gradient in imaginary direction will be zero and W will converge to a real-valued saddle
point.
We simulated two random interacting subsystems of dimensions $N_A$ and $N_B$ which were assumed to be mutually independent. The two subsystems were mapped into $N = N_A + N_B$ channels with a random mixture matrix. The anti-symmetrized cross-correlation matrices read

$D(\tau) = A \begin{pmatrix} D_A(\tau) & 0 \\ 0 & D_B(\tau) \end{pmatrix} A^\top$   (14)

where $A$ is a random real-valued $N \times N$ matrix, and $D_A(\tau)$ ($D_B(\tau)$), with $\tau = 1, \ldots, 20$, are a set of random anti-symmetric $N_A \times N_A$ ($N_B \times N_B$) matrices. Note that, in this context, $\tau$ has no physical meaning.
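This construction can be reproduced with a few lines of numpy (our re-implementation of the stated setup; the random seed is an arbitrary choice):

```python
import numpy as np

def random_antisym(n, rng):
    """Random anti-symmetric n x n matrix."""
    M = rng.standard_normal((n, n))
    return M - M.T

rng = np.random.default_rng(2)
NA, NB, n_taus = 2, 3, 20
N = NA + NB
A = rng.standard_normal((N, N))              # random real mixing matrix

D_set = []
for _ in range(n_taus):                      # tau = 1..20, no physical meaning here
    block = np.zeros((N, N))
    block[:NA, :NA] = random_antisym(NA, rng)
    block[NA:, NA:] = random_antisym(NB, rng)
    D_set.append(A @ block @ A.T)            # Eq. (14)
```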
As expected, we have found that if one of the subsystems is two-dimensional, the respective block can always be diagonalized exactly for any number of $\tau$s. We have also seen that the diagonalization procedure always perfectly separates the two subsystems even if a diagonalization within a subsystem is not possible. A typical result for $N_A = 2$ and $N_B = 3$ is presented in Fig. 1. In the left panel we show the average of the absolute value of correlation matrices before spatial mixing. In the middle panel we show the respective result after random spatial mixture and subsequent demixing, and in the right panel we show $W_1 A$, where $W_1$ is the estimated real version of the demixing matrix as explained in the preceding section. We note again that, also for the two-dimensional block, which can always be diagonalized exactly, one can only recover the corresponding two-dimensional subspace and not the source components themselves.
Figure 1: Left: average of the absolute values of correlation matrices before spatial mixing;
middle: same after random spatial mixture and subsequent demixing; right: product of the
estimated demixing matrix and the true mixing matrix (W1 A). White indicates zero and
black the maximum value for each matrix.
3.2 Real MEG data
We applied our method to real data gathered in 93 MEG channels during triggered finger
movements of the right or left hand. We recall that for each interacting component we get
two results: a) the 2D subspace spanned by the two components and b) the diagonals of the
demixed system, say $\pm i\alpha_k(\tau)$. To visualize the 2D subspace in a unique way we construct from the two patterns of the $k$-th system, say $x_1$ and $x_2$, the anti-symmetric outer product

$D_k \equiv x_1 x_2^\top - x_2 x_1^\top$
(15)
Indeed, the $k$-th subsystem contributes this matrix to the anti-symmetrized cross-correlations $D(\tau)$ with varying amplitude for all $\tau$.
The matrix $D_k$ is now visualized as shown in Fig. 3. The $i$-th row of $D_k$ corresponds to the interaction of the $i$-th channel with all others, and this interaction is represented by the contour-plot within the $i$-th circle located at the respective channel position. In this example, the observed structure clearly corresponds to the interaction between eye-blinks and visual cortex, since occipital channels interact with channels close to the eyes and vice versa.
In the upper panels of Fig. 2 we show the corresponding temporal and spectral structures of this interaction, represented by $\alpha_k(\tau)$ and its Fourier transform, respectively. We observe in the temporal domain a peak at a delay around 120 ms (indicated by the arrow), which corresponds well to the response time of the primary visual cortex to visual input.
In the lower panels of Fig. 2 we show the temporal and spectral pattern of another interacting component, with a clear peak in the alpha range (10 Hz). The corresponding spatial pattern (Fig. 4) clearly indicates an interacting system in occipital-parietal areas.
[Plots omitted: power in a.u. versus time in msec (left panels, 0-1000 ms) and versus frequency in Hz (right panels, 0-50 Hz).]
Figure 2: Diagonals of demixed antisymmetric correlation matrices as a function of delay
? (left panels) and, after Fourier transformation, as a function of frequency (right panels).
Top: interaction of eye-blinks and visual cortex; bottom: interaction of alpha generators.
Figure 3: Spatial pattern corresponding to the interaction between eye-blinks and visual
cortex.
4 Conclusion
When analyzing interaction between brain sources from macroscopic measurements like
EEG/MEG it is important to distinguish physiologically reasonable patterns of interaction
and spurious ones. In particular, volume conduction effects make large parts of the cortex
seemingly interact although in reality such contributions are purely artifactual. Existing
BSS methods that have been used with success for artifact removal and for estimation of
brain sources will by construction fail when attempting to separate interacting, i.e. non-independent, brain sources. In this work we have proposed a new BSS algorithm that uses
anti-symmetrized cross-correlation matrices and subsequent diagonalization and can thus
reliably extract meaningful interaction while ignoring all spurious effects. Experiments
using our interacting source analysis (ISA) reveal interesting relationships that are found
blindly, e.g. inferring a component that links both eyes with visual cortex activity in a
self-paced finger movement experiment. A more detailed look at the spectrum exhibits a
peak at the typing frequency, and, in fact going back to the original MEG traces, eye-blinks
were strongly coupled with the typing speed. This simple finding exemplifies that ISA
is a powerful new technique for analyzing dynamical correlations in macroscopic brain
measurements.
Future studies will therefore apply ISA to other neurophysiological paradigms in order
to gain insights into the coherence and synchronicity patterns of cortical dynamics. It is
especially of high interest to explore the possibilities of using true brain interactions as
revealed by the imaginary part of cross-spectra as complementing information to improve
the performance of brain computer interfaces.
Figure 4: Spatial pattern corresponding to the interaction between alpha generators.
Acknowledgements. We thank G. Curio for valuable discussions. This work was supported in part by the IST Programme of the European Community, under PASCAL Network of Excellence, IST-2002-506778, and the BMBF in the BCI III project (grant 01BE01A). This publication only reflects the authors' views.
References
[1] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley, 2001.
[2] V.K. Jirsa. Connectivity and dynamics of neural information processing. Neuroinformatics, (2):183–204, 2004.
[3] Frank Meinecke, Andreas Ziehe, Jürgen Kurths, and Klaus-Robert Müller. Measuring Phase Synchronization of Superimposed Signals. Physical Review Letters, 94(8), 2005.
[4] G. Nolte, O. Bai, L. Wheaton, Z. Mari, S. Vorbach, and M. Hallett. Identifying true brain interaction from EEG data using the imaginary part of coherency. Clinical Neurophysiology, 115:2292–2307, 2004.
[5] A. Pikovsky, M. Rosenblum, and J. Kurths. Synchronization – A Universal Concept in Nonlinear Sciences. Cambridge University Press, 2001.
[6] W. Singer. Striving for coherence. Nature, 397(6718):391–393, Feb 1999.
[7] A. Yeredor, A. Ziehe, and K.-R. Müller. Approximate joint diagonalization using a natural-gradient approach. In Carlos G. Puntonet and Alberto Prieto, editors, Lecture Notes in Computer Science, volume 3195, pages 89–96, Granada, 2004. Springer-Verlag. Proc. ICA 2004.
[8] A. Ziehe and K.-R. Müller. TDSEP – an efficient algorithm for blind separation using time structure. In L. Niklasson, M. Bodén, and T. Ziemke, editors, Proceedings of the 8th International Conference on Artificial Neural Networks, ICANN'98, Perspectives in Neural Computing, pages 675–680, Berlin, 1998. Springer Verlag.
1,961 | 2,782 | Temporally changing synaptic plasticity
Minija Tamosiunaite1,2, Bernd Porr3, and Florentin Wörgötter1,4
1 Department of Psychology, University of Stirling, Stirling FK9 4LA, Scotland
2 Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
3 Department of Electronics & Electrical Engineering, University of Glasgow, Glasgow GT12 8LT, Scotland
4 Bernstein Centre for Computational Neuroscience, University of Göttingen, Germany
{minija,worgott}@cn.stir.ac.uk; [email protected]
Abstract
Recent experimental results suggest that dendritic and back-propagating
spikes can influence synaptic plasticity in different ways [1]. In this study
we investigate how these signals could temporally interact at dendrites
leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity
rule to emulate spike-timing dependent plasticity. We use dendritic (D-)
and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We
will analyze a situation where synapse plasticity characteristics change in
the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local
D-spikes, a slow, unspecific growth process is induced. As soon as the
soma begins to spike this process is replaced by fast synaptic changes as
the consequence of the much stronger and sharper BP-spike, which now
dominates the plasticity rule. This way a winner-take-all-mechanism
emerges in a two-stage process, enhancing the best-correlated inputs.
These results suggest that synaptic plasticity is a temporally changing process by which the computational properties of dendrites or complete neurons can be substantially augmented.
1 Introduction
The traditional view on Hebbian plasticity is that the correlation between pre- and postsynaptic events will drive learning. This view ignores the fact that synaptic plasticity is driven
by a whole sequence of events and that some of these events are causally related. For example, the postsynaptic spike will usually be triggered by the synaptic activity at a cluster of synapses. This signal can then travel retrogradely into the dendrite (as a so-called back-propagating or BP-spike, [3]), leading to a depolarization at this and other clusters
of synapses by which their plasticity will be influenced. More locally, something similar
can happen if a cluster of synapses is able to elicit a dendritic spike (D-spike, [4, 5]), which
may not travel far, but which certainly leads to a local depolarization ?under? these and
adjacent synapses, triggering synaptic plasticity of one kind or another. Hence synaptic
plasticity seems to be to some degree influenced by recurrent processes. In this study, we
will use a differential Hebbian learning rule [2, 6] to emulate spike timing dependent plasticity (STDP, [7, 8]). With one specifically chosen example architecture we will investigate
how the temporal relation between dendritic- and back propagating spikes could influence
plasticity. Specifically we will report how learning could change during the course of network development, and how that could enrich the computational properties of the affected
neuronal compartments.
Figure 1: Basic learning scheme with $x_1, \ldots, x_n$ representing inputs to cluster 1; $h_{AMPA}$, $h_{NMDA}$ - filters shaping AMPA and NMDA signals; $h_{DS}$, $\hat{h}_{DS}$, $h_{BP}$ - filters shaping D- and BP-spikes; $q_1$, $q_2$ - differential thresholds; and a delay element. Weight impact is saturated. Only the first of $m$ clusters is shown explicitly; clusters $2, 3, \ldots, m$ would be employing the same BP spike (not shown). The symbol $\Sigma$ represents a summation node and $\times$ multiplication.
2 The Model
A block diagram of the model is shown in Fig. 1. The model includes several clusters of
synapses located on dendritic branches. Dendritic spikes are elicited following the summation of several AMPA signals passing threshold q1 . NMDA receptor influence on dendritic
spike generation was not considered as the contribution of NMDA potentials to the total membrane potential is substantially smaller than that of AMPA channels at a mixed
synapse.
Inputs to the model arrive in groups, but each input line gets only one pulse in a given
group (Fig. 2 C). Each synaptic cluster is limited to generating one dendritic spike from
one arriving pulse group. Cell firing is not explicitly modelled but said to be achieved
when the summation of several dendritic spikes at the cell soma has passed threshold q 2 .
This leads to a BP-spike. Progression of signals along a dendrite is not modelled explicitly,
but expressed by means of delays. Since we do not model biophysical processes, all signal
shapes are obtained by appropriate filters h, where u = x ? h is the convolution of spike
train x with filter h.
A differential Hebbian-type learning rule is used to drive synaptic plasticity [2, 6], with $\dot{\omega} = \mu\, u\, \dot{v}$, where $\omega$ denotes the synaptic weight, $u$ stands for the synaptic input, $v$ for the output, and $\mu$ for the learning rate (see, e.g., the $u$ and $\dot{v}$ annotations in Fig. 1, top left).
NMDA signals are used as the pre-synaptic signals, dendritic spikes, or dendritic spikes
complemented by back-propagating spikes, define the post-synaptic signals for the learning
rule. In addition, synaptic weights were sigmoidally saturated with limits zero and one.
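In discrete time this rule amounts to the update sketched below (our own minimal sketch, assuming simple Euler integration with step dt; the hard clipping is only a crude stand-in for the sigmoidal saturation mentioned above):

```python
import numpy as np

def diff_hebb_step(w, u, v, v_prev, mu, dt):
    """One Euler step of the differential Hebbian rule: dw/dt = mu * u * dv/dt."""
    v_dot = (v - v_prev) / dt        # derivative of the post-synaptic signal
    w = w + mu * u * v_dot * dt
    return np.clip(w, 0.0, 1.0)      # crude stand-in for the sigmoidal saturation
```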
Filter shapes forming AMPA and NMDA channel responses, as well as back-propagating spikes and some forms of dendritic spikes used in this study, were described by

$h(t) = \dfrac{e^{-2\pi t/\tau} - e^{-8\pi t/\tau}}{6\pi/\tau}$   (1)
where $\tau$ determines the total duration of the pulse. The ratio between rise and fall time is 1:4. We use for AMPA channels $\tau = 6$ ms, for NMDA channels $\tau = 120$ ms, for dendritic spikes $\tau = 235$ ms, and for BP-spikes $\tau = 40$ ms.
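Eq. (1) can be transcribed directly (our own sketch; the time grid and example spike times are arbitrary choices):

```python
import numpy as np

def h(t, tau):
    """Filter kernel of Eq. (1); tau sets the total pulse duration (rise:fall = 1:4)."""
    return (np.exp(-2 * np.pi * t / tau) - np.exp(-8 * np.pi * t / tau)) / (6 * np.pi / tau)

dt = 1.0                                   # ms
t = np.arange(0, 500, dt)
u_ampa = h(t, tau=6.0)                     # AMPA response to a single input spike
u_nmda = h(t, tau=120.0)                   # NMDA response

spikes = np.zeros_like(t)
spikes[[20, 140, 300]] = 1.0               # example pre-synaptic spike train
u = np.convolve(spikes, u_nmda)[:len(t)]   # u = x * h, as in the text
```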
Note that we approximate the NMDA characteristic by a non-voltage-dependent filter function. In conjunction with STDP, this simplification is justified by Saudargiene et al.
[2, 9], showing that voltage dependency induces only a second-order effect on the shape of
the STDP curve.
Individual input timings are drawn from a uniform distribution from within a pre-specified
interval which can vary under different conditions. We distinguish three basic input groups:
strongly correlated inputs (several inputs over an interval of up to 10 ms), less correlated
(dispersed over an interval of 10-100 ms) and uncorrelated (dispersed over the interval of
more than 100 ms).
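The pulse-group construction can be mimicked as follows (our own sketch; the interval widths follow the three categories above):

```python
import numpy as np

def pulse_group(center, width, n_inputs, rng):
    """One spike time per input line, drawn uniformly within the given interval."""
    return center + rng.uniform(-width / 2, width / 2, size=n_inputs)

rng = np.random.default_rng(3)
t0 = 500.0                                     # ms, centre of this pulse group
strongly_corr = pulse_group(t0, 10.0, 3, rng)  # dispersed over up to 10 ms
less_corr = pulse_group(t0, 35.0, 3, rng)      # 10-100 ms category
uncorr = pulse_group(t0, 150.0, 3, rng)        # > 100 ms category
```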
Figure 2: Example STDP curves (A,B), input pulse distribution (C), and model setup (D). A) STDP curve obtained with a D-spike using Eq. 1 with $\tau = 235$ ms; B) from a BP-spike with $\tau = 40$ ms. C) Example input pulse distribution for two pulse groups. D) Model neuron with two dendritic branches (left and right), consisting of two sub-branches which get inputs X or Y, which are similar for either side. DS stands for D-spike, BP for a BP-spike.
3 Results
3.1 Experimental setup
Fig. 2 A,B shows two STDP curves, one obtained with a wide D-spike, the other with a much sharper BP-spike. The study investigates interactions of such post-synaptic signals in time. Though the signals interact linearly, the much stronger BP signal dominates
learning when elicited. In the absence of a BP spike the D-spike dominates plasticity. This
seems to correspond to new physiological observations concerning the relations between
post-synaptic signals and the actually expressed form of plasticity [10]. We specifically
investigate a two-phase process, where plasticity is first dominated by the D-spike and
later by a BP-spike.
Fig. 2 D shows a setup in which two-phase plasticity could arise. We assume that inputs to
compact clusters of synapses are similar (e.g. all left branches in Fig. 2 D) but dissimilar
over larger distances (between left and right branches). First, e.g. early in development,
synapses may be weak and only the conjoint action of many synchronous inputs will lead to
a local D-spike. Local plasticity from these few D-spikes (indicated by the circular arrow
under the dendritic branches in Fig. 2) strengthens these synapses, and at some point D-spikes are elicited more reliably at conjoint branches. This could finally also lead to spiking
at the soma and, hence, to a BP-spike, changing plasticity of the individual synapses.
To emulate such a multi-cluster system we actually model only one left and one right
branch. Plasticity in both branches is driven by D-spikes in the first part of the experiment. Assuming that at some point the cell will be driven into spiking, a BP-spike is added
after several hundred pulse groups (second part of the experiment).
Figure 3: Temporal weight development for the setup shown in Fig. 2 with one sub-branch for the driving cluster (A), and one for the non-driving cluster (B). Initially all weights grow gradually until the driving cluster leads to a BP-spike after 200 pulse groups. Thus only the weights of its group $x_1$–$x_3$ will continue to grow, now at an increased rate.
3.2 An emerging winner-take-all mechanism
In Fig. 3 we have simulated two clusters each with nine synapses. For both clusters, we
assume that the input activity for three synapses is closely correlated and that they occur
in a temporal interval of 6 ms (group x, y: 1–3). Three other inputs are more widely dispersed (interval of 35 ms, group x, y: 4–6) and the three remaining ones arrive uncorrelated in an interval of 150 ms (group x, y: 7–9). The activity of the second cluster is determined by the same parameters. Pulse groups arriving at the second cluster, however, were randomly shifted by maximally ±20 ms relative to the centre of the pulse group of the first cluster.
All synapses start with weights 0.5, which will not suffice to drive the soma of the cell
into spiking. Hence initially plasticity can only take place by D-spikes, and we assume
that D-spikes will not reach the other cluster. Hence, learning is local. The wide D-spike
leads to a broad learning curve which has a span of about ±17.5 ms around zero, covering the dispersion of input groups 1–3 as well as 4–6. Furthermore, it has a slightly bigger area under the LTP part as compared to the LTD part. As a consequence, in both diagrams (Fig. 3 A,B) we see that all weights 1–6 grow; only for the least correlated inputs 7–9 do the weights remain close to their origin. The correlated group 1–3, however, benefits most
strongly, because it is more likely that a D-spike will be elicited by this group than by any
other combination.
Conjoint growth at a whole cluster of such synapses would at some point drive the cell into
somatic firing. Here we just assume that this happens for one cluster (Fig. 3 A) at a certain
time point. This can, for example, be the case when the input properties of the two input
groups are different, leading to (slightly) less weight growth in the other cluster. As soon as
this happens, a BP-spike is triggered and the STDP curve takes a narrow shape similar to that in Fig. 2 B, now strongly enhancing all causally driving synapses, hence group $x_1$–$x_3$ (Fig. 3 A). This group grows at an increased rate while all other synapses shrink. Hence,
in general this system exhibits two-phase plasticity. This result was reproduced in a model
with 100 synapses in each input group (data not shown) and in the next sections we will
show that a system with two growth phases is rather robust against parameter variations.
Figure 4: Robustness of the observed effects. Plotted are the average weights of the less correlated group (ordinate) against the correlated group (abscissa). Simulation with three correlated and three less correlated inputs; for AMPA: $\tau = 6$ ms, for NMDA: $\tau = 117$ ms, for D-spike: $\tau = 235$ ms, for BP-spike: $\tau = 6$–$66$ ms; $q_1 = 0.14$. D/BP spike amplitude relation from 1/1.5 to 1/15, depending on BP-spike width, and keeping the area under the BP-spike constant; $\mu = 0.2$. For further explanation see text.
3.3 Robustness
This system is not readily suited for analytical investigation like the simpler ones in [9].
However, a fairly exhaustive parameter analysis is performed. Fig. 4 shows a plot of 350
experiments with the same basic architecture, using only one synapse cluster and the same
chain of events as before but with different parameter settings. Only ?strong correlated?
(< 10 ms) and ?less correlated? (10 ? 100 ms) inputs were used in this experiment. Each
point represents one experiment consisting of 600 pulse groups. On the abscissa we plot
the average weight of the three correlated synapses; on the ordinate the average weight of
the three less correlated synapses after these 600 pulse groups. We assume, as in the last
experiment, that a BP-spike is triggered as soon as q2 is passed, which happens around
pulse group 200 in all cases.
Four parameters were varied to obtain this plot. (1) The width of the BP-spike was varied
between 5 ms and 50 ms. (2) The interval width for the temporal distribution of the three
correlated spikes was varied between 1 ms and 10 ms. Hence 1 ms amounts to three
synchronously elicited spikes. (3) The interval width for the temporal distribution of the
three less correlated spikes was varied between 1 ms and 100 ms. (4) The shift of the
BP-spike with respect to the beginning of the D-spike was varied in an interval of ?80 ms.
Mainly parameters 3 and 4 have an effect on the results. The first parameter, BP spike
width, shows some small interference with the spike shift for the widest spikes. The second
parameter has almost no influence, due to the small parameter range (10 ms). Symbol
coding is used in Fig. 4 to better depict the influence of parameters 3 and 4 in their different
ranges. Symbols ?dots?, ?diamonds? and ?others? (circles and plusses) refer to a BP-spike
shifts: of less than ?5 ms (dots), between ?5 ms and +5 ms (diamonds) and larger
than +5 ms (circles and pluses). Circles in the latter region show cases with the less
correlated dispersion interval below 40 ms, and plusses the cases of the dispersion 40 ms
or higher. The 'dot' region (shifts below −5 ms) shows cases where correlated synapses will grow,
while less correlated synapses can grow or shrink. This happens because the BP spike
is too early to influence plasticity in the strongly correlated group, which will grow by
the DS-mechanism only, but the BP-spike still falls in the dispersion range of the less
correlated group, influencing its weights. At a shift of ?5 ms a fast transition in the weight
development occurs. The reason for this transition is that the BP-spike, being very close
to the D-spike, overrules the effect of the D-spike. The randomness of whether the input falls into the pre- or post-output zone in both the correlated and the less correlated groups is large enough, and leads to weights staying close to the origin or to shrinkage. The circles and plusses encode
the dispersion of the wide, less correlated spike distributions in the case when time shifts of
the BP-spike are positive (> 5 ms, hence BP-spike after D-spike). Dispersions are getting
wider essentially from top to bottom (circle to dot). Clearly this shows that there are many
cases corresponding to the example depicted in Fig. 3 (horizontal tail of Fig. 4 A), but there
are also many conventional situations, where both weight-groups just grow in a similar way
(diagonal).
The data points show a certain regularity when the BP spike shift moves from big values
towards the borderline of +5 ms, where the weights stop growing. For big shifts, points
cluster on the upper, diagonal tail in or near the dot region. With a smaller BP spike shift
points move up this tail and then drop down to the horizontal tail, which occurs for shifts
of about 20 ms. This pattern is typical for the bigger dispersion in the range of 20 ? 60 ms
and data points essentially follow the circle drawn in the figure.
This happens because as soon as the BP-spike gets closer to the D-spike, it will start to
exert its influence. But this will first only affect the less correlated group as there are
almost always some inputs arriving so late that they 'collide' with the BP-spike. The time of collision, however, is random, and sometimes these inputs are 'pre' while sometimes they are 'post' with respect to the BP-spike. Hence LTP and LTD will be essentially balanced in the less
correlated group, leading on average to zero weight growth. This effect is most pronounced
when the less correlated group has an intermediate dispersion (see the circles from the
upper tail dropping to the lower tail in the range of dispersions 20–40 ms), while it does not occur if the dispersions of the correlated and less correlated groups are similar (1–20 ms).
Furthermore, the clear separation into the top tail (circles, 1–40 ms) and bottom tail (plusses, 61–100 ms) indicates that it is possible to let the parameters drift quite a bit without
leaving the respective regions. Hence, while the moment-to-moment weight growth might
change, the general pattern will stay the same.
4 Discussion
As with the famous Baron von Münchhausen, who was able to pull himself out of a swamp by his own hair, the current study suggests that a change of plasticity as a consequence of itself might lead to specific functional properties. In order to arrive at this conclusion,
we have used a simplified model of STDP and combined it with a custom designed and
also simplified dendritic architecture. Hence, can the conclusions of this study be valid and
where are the limitations? We believe that the answer to the first question is affirmative, because
the degree of abstraction used in this model and the complexity of the results match. This
model never attempted to address the difficult issues of the biophysics of synaptic plasticity
(for a discussion see [2]) and it was also not our goal to investigate the mechanisms of signal
propagation in a dendrite [11]. Both aspects had been reduced to a few basic descriptors
and this way we were able to show for the first time that a useful synaptic selection process
can develop over time. The system consisted of a first 'pre-growth' phase (until the BP-spike sets in) followed by a second phase where only one group of synapses grows strongly,
while the others shrink again. In general this example describes a scenario where groups
of synapses first undergo less selective classical Hebbian-like growth, while later more
pronounced STDP sets in, selecting only the main driving group. We believe that in the
early development of a real brain such a two-phase system might be beneficial for the
stable selection of those synapses that are better correlated. It is conceivable that at early
developmental stages correlations are in general weaker, while the number of inputs to a
cell is probably much higher than in the adult stage, where many have been pruned. Hence
highly selective and strong STDP-like plasticity employed too early might lead to a noise-induced growth of 'the wrong' synapses. This, however, might be prevented by just such a soft pre-selection mechanism, which would gradually drive clusters of synapses apart by
a local dendritic process before the stronger influence of the back-propagating spike sets
in. This is supported by recent results from Holthoff et al [1, 12], who have shown that Dspikes will lead to a different type of plasticity than BP-spikes in layer 5 pyramidal cells in
mouse cortex. Many more complications exist, for example the assumed chain of events of
D- and BP-spikes may be very different in different neurons and the interactions between
these signals may be far more non-linear (but see [10]). This will require re-addressing
these issues in greater detail when dealing with a specific given neuron but the general
conclusions about the self-influencing and local [2, 13] character of synaptic plasticity and
their possible functional use should hopefully remain valid.
5 Acknowledgements
The authors acknowledge the support from SHEFC INCITE and IBRO. We are grateful to
B. Graham, L. Smith and D. Sterratt for their helpful comments on this work. The authors
wish to especially express their thanks to A. Saudargiene for her help at many stages in this
project.
References
[1] K. Holthoff, Y. Kovalchuk, R. Yuste, and A. Konnerth. Single-shock plasticity induced by local dendritic spikes. In Proc. Göttingen NWG Conference, page 245B, 2005.
[2] A. Saudargiene, B. Porr, and F. Wörgötter. How the shape of pre- and postsynaptic signals can influence STDP: a biophysical model. Neural Comp., 16:595–626, 2004.
[3] N.L. Golding, W. L. Kath, and N. Spruston. Dichotomy of action-potential backpropagation in CA1 pyramidal neuron dendrites. J. Neurophysiol., 86:2998–3010, 2001.
[4] M. E. Larkum, J. J. Zhu, and B. Sakmann. Dendritic mechanisms underlying the coupling of the dendritic with the axonal action potential initiation zone of adult rat layer 5 pyramidal neurons. J. Physiol. (Lond.), 533:447–466, 2001.
[5] N. L. Golding, P. N. Staff, and N. Spruston. Dendritic spikes as a mechanism for cooperative long-term potentiation. Nature, 418:326–331, 2002.
[6] B. Porr and F. Wörgötter. Isotropic sequence order learning. Neural Comp., 15:831–864, 2003.
[7] J. C. Magee and D. Johnston. A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. Science, 275:209–213, 1997.
[8] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213–215, 1997.
[9] A. Saudargiene, B. Porr, and F. Wörgötter. Local learning rules: predicted influence of dendritic location on synaptic modification in spike-timing-dependent plasticity. Biol. Cybern., 92:128–138, 2005.
[10] H.-X. Wang, R. C. Gerkin, D. W. Nauen, and G.-Q. Bi. Coactivation and timing-dependent integration of synaptic potentiation and depression. Nature Neurosci., 8:187–193, 2005.
[11] P. Vetter, A. Roth, and M. Häusser. Propagation of action potentials in dendrites depends on dendritic morphology. J. Neurophysiol., 85:926–937, 2001.
[12] K. Holthoff, Y. Kovalchuk, R. Yuste, and A. Konnerth. Single-shock LTD by local dendritic spikes in pyramidal neurons of mouse visual cortex. J. Physiol., 560.1:27–36, 2004.
[13] R. C. Froemke, M-m. Poo, and Y. Dan. Spike-timing-dependent synaptic plasticity depends on dendritic location. Nature, 434:221–225, 2005.
1,962 | 2,783 | Gaussian Process Dynamical Models
Jack M. Wang, David J. Fleet, Aaron Hertzmann
Department of Computer Science
University of Toronto, Toronto, ON M5S 3G4
{jmwang,hertzman}@dgp.toronto.edu, [email protected]
Abstract
This paper introduces Gaussian Process Dynamical Models (GPDM) for
nonlinear time series analysis. A GPDM comprises a low-dimensional
latent space with associated dynamics, and a map from the latent space
to an observation space. We marginalize out the model parameters in
closed-form, using Gaussian Process (GP) priors for both the dynamics
and the observation mappings. This results in a nonparametric model
for dynamical systems that accounts for uncertainty in the model. We
demonstrate the approach on human motion capture data in which each
pose is 62-dimensional. Despite the use of small data sets, the GPDM
learns an effective representation of the nonlinear dynamics in these
spaces. Webpage: http://www.dgp.toronto.edu/? jmwang/gpdm/
1 Introduction
A central difficulty in modeling time-series data is in determining a model that can capture
the nonlinearities of the data without overfitting. Linear autoregressive models require
relatively few parameters and allow closed-form analysis, but can only model a limited
range of systems. In contrast, existing nonlinear models can model complex dynamics, but
may require large training sets to learn accurate MAP models.
In this paper we investigate learning nonlinear dynamical models for high-dimensional
datasets. We take a Bayesian approach to modeling dynamics, averaging over dynamics
parameters rather than estimating them. Inspired by the fact that averaging over nonlinear
regression models leads to a Gaussian Process (GP) model, we show that integrating over
parameters in nonlinear dynamical systems can also be performed in closed-form. The
resulting Gaussian Process Dynamical Model (GPDM) is fully defined by a set of low-dimensional representations of the training data, with both dynamics and observation mappings learned from GP regression. As a natural consequence of GP regression, the GPDM
removes the need to select many parameters associated with function approximators while
retaining the expressiveness of nonlinear dynamics and observation.
Our work is motivated by modeling human motion for video-based people tracking and
data-driven animation. Bayesian people tracking requires dynamical models in the form
of transition densities in order to specify prediction distributions over new poses at each
time instant (e.g., [11, 14]); similarly, data-driven computer animation requires prior distributions over poses and motion (e.g., [1, 4, 6]). An individual human pose is typically
parameterized with more than 60 parameters. Despite the large state space, the space of
activity-specific human poses and motions has a much smaller intrinsic dimensionality; in
our experiments with walking and golf swings, 3 dimensions often suffice.
Our work builds on the extensive literature in nonlinear time-series analysis, of which we
Figure 1: Time-series graphical models. (a) Nonlinear latent-variable model for time series. (Hyperparameters ᾱ and β̄ are not shown.) (b) GPDM model. Because the mapping parameters A and B have been marginalized over, all latent coordinates X = [x_1, ..., x_N]^T are jointly correlated, as are all poses Y = [y_1, ..., y_N]^T.
mention a few examples. Two main themes are the use of switching linear models (e.g.,
[11]), and nonlinear transition functions, such as represented by Radial Basis Functions
[2]. Both approaches require sufficient amounts of training data that one can learn the
parameters of the switching or basis functions. Determining the appropriate number of
basis functions is also difficult. In Kernel Dynamical Modeling [12], linear dynamics are
kernelized to model nonlinear systems, but a density function over data is not produced.
Supervised learning with GP regression has been used to model dynamics for a variety
of applications [3, 7, 13]. These methods model dynamics directly in observation space,
which is impractical for the high-dimensionality of motion capture data. Our approach
is most directly inspired by the unsupervised Gaussian Process Latent Variable Model
(GPLVM) [5], which models the joint distribution of the observed data and their corresponding representation in a low dimensional latent space. This distribution can then be
used as a prior for inference from new measurements. However, the GPLVM is not a dynamical model; it assumes that data are generated independently. Accordingly it does not
respect temporal continuity of the data, nor does it model the dynamics in the latent space.
Here we augment the GPLVM with a latent dynamical model. The result is a Bayesian
generalization of subspace dynamical models to nonlinear latent mappings and dynamics.
2 Gaussian Process Dynamics
The Gaussian Process Dynamical Model (GPDM) comprises a mapping from a latent space
to the data space, and a dynamical model in the latent space (Figure 1). These mappings
are typically nonlinear. The GPDM is obtained by marginalizing out the parameters of the
two mappings, and optimizing the latent coordinates of training data.
More precisely, our goal is to model the probability density of a sequence of vector-valued
states y_1, ..., y_t, ..., y_N, with discrete-time index t and y_t ∈ R^D. As a basic model, consider a latent-variable mapping with first-order Markov dynamics:

x_t = f(x_{t-1}; A) + n_{x,t}   (1)
y_t = g(x_t; B) + n_{y,t}   (2)
Here, x_t ∈ R^d denotes the d-dimensional latent coordinates at time t, n_{x,t} and n_{y,t} are
zero-mean, white Gaussian noise processes, f and g are (nonlinear) mappings parameterized by A and B, respectively. Figure 1(a) depicts the graphical model.
While linear mappings have been used extensively in auto-regressive models, here we consider the nonlinear case for which f and g are linear combinations of basis functions:
f(x; A) = \sum_i a_i \phi_i(x)   (3)
g(x; B) = \sum_j b_j \psi_j(x)   (4)
for weights A = [a_1, a_2, ...] and B = [b_1, b_2, ...], and basis functions φ_i and ψ_j. In order
to fit the parameters of this model to training data, one must select an appropriate number
of basis functions, and one must ensure that there is enough data to constrain the shape of
each basis function. Ensuring both of these conditions can be very difficult in practice.
However, from a Bayesian perspective, the specific forms of f and g ? including the
numbers of basis functions ? are incidental, and should therefore be marginalized out.
With an isotropic Gaussian prior on the columns of B, marginalizing over g can be done in
closed form [8, 10] to yield
p(Y \mid X, \bar\beta) = \frac{|W|^N}{\sqrt{(2\pi)^{ND} |K_Y|^D}} \exp\left(-\frac{1}{2} \mathrm{tr}\left(K_Y^{-1} Y W^2 Y^T\right)\right),   (5)
where Y = [y_1, ..., y_N]^T, K_Y is a kernel matrix, and β̄ = {β_1, β_2, ..., W} comprises the kernel hyperparameters. The elements of the kernel matrix are defined by a kernel function, (K_Y)_{i,j} = k_Y(x_i, x_j). For the latent mapping, X -> Y, we currently use the RBF kernel
k_Y(x, x') = \beta_1 \exp\left(-\frac{\beta_2}{2} \|x - x'\|^2\right) + \beta_3^{-1} \delta_{x,x'}.   (6)
As in the SGPLVM [4], we use a scaling matrix W ≡ diag(w_1, ..., w_D) to account for different variances in the different data dimensions. This is equivalent to a GP with kernel function k(x, x')/w_m^2 for dimension m. Hyperparameter β_1 represents the overall scale of the output function, while β_2 corresponds to the inverse width of the RBFs. The variance of the noise term n_{y,t} is given by β_3^{-1}.
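To make Eqns. (5) and (6) concrete, here is a minimal illustrative Python/NumPy sketch (not the authors' code; the function names and the tuple layout of beta are our assumptions) that evaluates the RBF kernel k_Y and the marginal log-likelihood ln p(Y | X, β̄) for given latent coordinates X, data Y and scale vector w:

    import numpy as np

    def k_Y(X1, X2, beta):
        # RBF part of Eqn. (6): beta1 * exp(-beta2/2 * ||x - x'||^2);
        # the beta3^{-1} noise term is added separately on the diagonal.
        b1, b2, _ = beta
        sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
        return b1 * np.exp(-0.5 * b2 * sq)

    def log_p_Y_given_X(Y, X, beta, w):
        # ln p(Y | X, beta) of Eqn. (5):
        # N ln|W| - (D/2) ln|K_Y| - (ND/2) ln(2 pi) - (1/2) tr(K_Y^{-1} Y W^2 Y^T)
        N, D = Y.shape
        K = k_Y(X, X, beta) + np.eye(N) / beta[2]
        chol = np.linalg.cholesky(K)
        logdetK = 2.0 * np.sum(np.log(np.diag(chol)))
        YW = Y * w[None, :]                          # Y W, with W = diag(w)
        quad = np.sum(np.linalg.solve(K, YW) * YW)   # tr(K^{-1} Y W^2 Y^T)
        return (N * np.sum(np.log(w)) - 0.5 * D * logdetK
                - 0.5 * N * D * np.log(2.0 * np.pi) - 0.5 * quad)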
The dynamic mapping on the latent coordinates X is conceptually similar, but more subtle.1
As above, we form the joint probability density over the latent coordinates and the dynamics
weights A in (3). We then marginalize over the weights A, i.e.,
p(X \mid \bar\alpha) = \int p(X, A \mid \bar\alpha) \, dA = \int p(X \mid A, \bar\alpha) \, p(A \mid \bar\alpha) \, dA.   (7)
Incorporating the Markov property (Eqn. (1)) gives:
p(X \mid \bar\alpha) = p(x_1) \int \prod_{t=2}^{N} p(x_t \mid x_{t-1}, A, \bar\alpha) \, p(A \mid \bar\alpha) \, dA,   (8)
where ᾱ is a vector of kernel hyperparameters. Assuming an isotropic Gaussian prior on the columns of A, it can be shown that this expression simplifies to:

p(X \mid \bar\alpha) = \frac{p(x_1)}{\sqrt{(2\pi)^{(N-1)d} |K_X|^d}} \exp\left(-\frac{1}{2} \mathrm{tr}\left(K_X^{-1} X_{out} X_{out}^T\right)\right),   (9)
where X_out = [x_2, ..., x_N]^T, K_X is the (N-1) × (N-1) kernel matrix constructed from {x_1, ..., x_{N-1}}, and x_1 is assumed to have an isotropic Gaussian prior.
following ?linear + RBF? kernel:
?
2
kX (x, x ) = ?1 exp ? ||x ? x ||2 + ?3 xT x + ?4?1 ?x,x .
(10)
2
The kernel corresponds to representing f as the sum of a linear term and RBF terms. The inclusion of the linear term is motivated by the fact that linear dynamical models, such as first- or second-order autoregressive models, are useful for many systems. Hyperparameters α_1, α_2 represent the output scale and the inverse width of the RBF terms, and α_3 represents the output scale of the linear term. Together, they control the relative weighting between the terms, while α_4^{-1} represents the variance of the noise term n_{x,t}.

[Footnote 1] Conceptually, we would like to model each pair (x_t, x_{t+1}) as a training pair for regression with f. However, we cannot simply substitute them directly into the GP model of Eqn. (5), as this leads to the nonsensical expression p(x_2, ..., x_N | x_1, ..., x_{N-1}).
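As a companion to Eqn. (10), the following hedged sketch (again ours, reusing np from the earlier block, with alpha = (α_1, α_2, α_3, α_4) assumed) implements the linear + RBF dynamics kernel and the dynamics log-density of Eqn. (9), omitting the p(x_1) term:

    def k_X(X1, X2, alpha):
        # Eqn. (10) without the noise term:
        # a1 * exp(-a2/2 * ||x - x'||^2) + a3 * x^T x'
        a1, a2, a3, _ = alpha
        sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
        return a1 * np.exp(-0.5 * a2 * sq) + a3 * (X1 @ X2.T)

    def log_p_X(X, alpha):
        # Eqn. (9) up to the isotropic Gaussian p(x1) term and constants.
        Xin, Xout = X[:-1], X[1:]
        n, d = Xout.shape
        K = k_X(Xin, Xin, alpha) + np.eye(n) / alpha[3]   # alpha4^{-1} noise
        chol = np.linalg.cholesky(K)
        logdetK = 2.0 * np.sum(np.log(np.diag(chol)))
        quad = np.sum(np.linalg.solve(K, Xout) * Xout)    # tr(K_X^{-1} Xout Xout^T)
        return -0.5 * d * logdetK - 0.5 * quad - 0.5 * n * d * np.log(2.0 * np.pi)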
It should be noted that, due to the nonlinear dynamical mapping in (3), the joint distribution
of the latent coordinates is not Gaussian. Moreover, while the density over the initial state
may be Gaussian, it will not remain Gaussian once propagated through the dynamics. One
can also see this in (9) since xt variables occur inside the kernel matrix, as well as outside
of it. So the log likelihood is not quadratic in xt .
Finally, we also place priors on the hyperparameters (p(ᾱ) ∝ ∏_i α_i^{-1} and p(β̄) ∝ ∏_i β_i^{-1}) to discourage overfitting. Together, the priors, the latent mapping, and the dynamics define a generative model for time-series observations (Figure 1(b)):

p(X, Y, \bar\alpha, \bar\beta) = p(Y \mid X, \bar\beta) \, p(X \mid \bar\alpha) \, p(\bar\alpha) \, p(\bar\beta).   (11)
Multiple sequences. This model extends naturally to multiple sequences Y1 , ..., YM .
Each sequence has associated latent coordinates X1 , ..., XM within a shared latent space.
For the latent mapping g we can conceptually concatenate all sequences within the GP
likelihood (Eqn. (5)). A similar concatenation applies for the dynamics, but omitting the
first frame of each sequence from Xout , and omitting the final frame of each sequence from
the kernel matrix KX . The same structure applies whether we are learning from multiple
sequences, or learning from one sequence and inferring another. That is, if we learn from
a sequence Y1 , and then infer the latent coordinates for a new sequence Y2 , then the joint
likelihood entails full kernel matrices KX and KY formed from both sequences.
Higher-order features. The GPDM can be extended to model higher-order Markov
chains, and to model velocity and acceleration in inputs and outputs. For example, a
second-order dynamical model,
x_t = f(x_{t-1}, x_{t-2}; A) + n_{x,t}   (12)
may be used to explicitly model the dependence of the prediction on two past frames (or
on velocity). In the GPDM framework, the equivalent model entails defining the kernel
function as a function of the current and previous time-step:
k_X([x_t, x_{t-1}], [x_\tau, x_{\tau-1}]) = \alpha_1 \exp\left(-\frac{\alpha_2}{2}\|x_t - x_\tau\|^2 - \frac{\alpha_3}{2}\|x_{t-1} - x_{\tau-1}\|^2\right) + \alpha_4 x_t^T x_\tau + \alpha_5 x_{t-1}^T x_{\tau-1} + \alpha_6^{-1} \delta_{t,\tau}   (13)
Similarly, the dynamics can be formulated to predict velocity:
v_{t-1} = f(x_{t-1}; A) + n_{x,t}   (14)

Velocity prediction may be more appropriate for modeling smooth motion trajectories. Using Euler integration with time-step Δt, we have x_t = x_{t-1} + v_{t-1} Δt. The dynamics likelihood p(X | ᾱ) can then be written by redefining X_out = [x_2 - x_1, ..., x_N - x_{N-1}]^T / Δt in Eqn. (9). In this paper, we use a fixed time-step of Δt = 1. This is analogous to using x_{t-1} as a "mean function." Higher-order features can also be fused together
with position information to reduce the Gaussian process prediction variance [15, 9].
3 Properties of the GPDM and Algorithms
Learning the GPDM from measurements Y entails minimizing the negative log-posterior:
L = -\ln p(X, \bar\alpha, \bar\beta \mid Y)   (15)
  = \frac{d}{2} \ln |K_X| + \frac{1}{2} \mathrm{tr}\left(K_X^{-1} X_{out} X_{out}^T\right) + \sum_j \ln \alpha_j - N \ln |W| + \frac{D}{2} \ln |K_Y| + \frac{1}{2} \mathrm{tr}\left(K_Y^{-1} Y W^2 Y^T\right) + \sum_j \ln \beta_j   (16)

up to an additive constant. We minimize L with respect to X, ᾱ, and β̄ numerically.
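Combining the two sketches above with the hyperparameter priors gives the objective of Eqn. (16) in a few lines. This is an illustrative reconstruction, not the authors' code; in practice one would minimize it over X, ᾱ and β̄ with a gradient-based optimizer (e.g. scipy.optimize.minimize) after the PCA initialization described below:

    def neg_log_posterior(X, Y, alpha, beta, w):
        # L of Eqn. (16), up to an additive constant; assumes log_p_X and
        # log_p_Y_given_X from the sketches above.
        L = -log_p_X(X, alpha) - log_p_Y_given_X(Y, X, beta, w)
        # priors p(alpha) ~ prod_j alpha_j^{-1}, p(beta) ~ prod_j beta_j^{-1}
        L += np.sum(np.log(alpha)) + np.sum(np.log(beta))
        return L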
Figure 2 shows a GPDM 3D latent space learned from human motion capture data comprising three walk cycles. Each pose was defined by 56 Euler angles for joints, 3 global
(torso) pose angles, and 3 global (torso) translational velocities. For learning, the data was
mean-subtracted, and the latent coordinates were initialized with PCA. Finally, a GPDM is
learned by minimizing L in (16). We used 3D latent spaces for all experiments shown here.
Using 2D latent spaces leads to intersecting latent trajectories. This causes large ?jumps?
to appear in the model, leading to unreliable dynamics.
For comparison, Fig. 2(a) shows a 3D SGPLVM learned from walking data. Note that
the latent trajectories are not smooth; there are numerous cases where consecutive poses
in the walking sequence are relatively far apart in the latent space. By contrast, Fig. 2(b)
shows that the GPDM produces a much smoother configuration of latent positions. Here
the GPDM arranges the latent positions roughly in the shape of a saddle.
Figure 2(c) shows a volume visualization of the inverse reconstruction variance, i.e., -2 ln σ_{y|x,X,Y,β̄}. This shows the confidence with which the model reconstructs a pose
from latent positions x. In effect, the GPDM models a high probability "tube" around
the data. To illustrate the dynamical process, Fig. 2(d) shows 25 fair samples from the
latent dynamics of the GPDM. All samples are conditioned on the same initial state, x0 ,
and each has a length of 60 time steps. As noted above, because we marginalize over the
weights of the dynamic mapping, A, the distribution over a pose sequence cannot be factored into a sequence of low-order Markov transitions (Fig. 1(a)). Hence, we draw fair
samples X̃^{(j)}_{1:60} ∼ p(X̃_{1:60} | x_0, X, Y, ᾱ) using hybrid Monte Carlo [8]. The resulting trajectories (Fig. 2(d)) are smooth and similar to the training motions.
3.1 Mean Prediction Sequences
For both 3D people tracking and computer animation, it is desirable to generate new motions efficiently. Here we consider a simple online method for generating a new motion,
called mean-prediction, which avoids the relatively expensive Monte Carlo sampling used
above. In mean-prediction, we consider the next timestep x̂_t conditioned on x̂_{t-1} from the Gaussian prediction [8]:

\hat{x}_t \sim N\left(\mu_X(\hat{x}_{t-1}); \sigma_X^2(\hat{x}_{t-1}) I\right)

\mu_X(x) = X_{out}^T K_X^{-1} k_X(x),   (17)
\sigma_X^2(x) = k_X(x, x) - k_X(x)^T K_X^{-1} k_X(x)   (18)
where k_X(x) is a vector containing k_X(x, x_i) in the i-th entry and x_i is the i-th training vector. In particular, we set the latent position at each time-step to be the most-likely (mean) point given the previous step: x̂_t = μ_X(x̂_{t-1}). In this way we ignore the process noise that one might normally add. We find that this mean-prediction often generates motions that are more like the fair samples shown in Fig. 2(d) than if random process noise had been added at each time step (as in (1)). Similarly, new poses are given by ŷ_t = μ_Y(x̂_t).
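A mean-prediction rollout as used here can be sketched as follows (illustrative code of ours, reusing k_X from the earlier block; the term K_X^{-1} X_out is precomputed once):

    def mean_prediction(x0, X, alpha, T):
        # Iterate x_t = mu_X(x_{t-1}) = Xout^T K_X^{-1} k_X(x_{t-1}), Eqn. (17),
        # ignoring the process noise as described in the text.
        Xin, Xout = X[:-1], X[1:]
        K = k_X(Xin, Xin, alpha) + np.eye(len(Xin)) / alpha[3]
        A = np.linalg.solve(K, Xout)                # K_X^{-1} Xout
        traj = [np.asarray(x0, dtype=float)]
        for _ in range(T):
            k = k_X(traj[-1][None, :], Xin, alpha)  # cross-kernel, shape (1, N-1)
            traj.append((k @ A)[0])                 # mean of Eqn. (17)
        return np.array(traj)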
Depending on the dataset and the choice of kernels, long sequences generated by sampling
or mean-prediction can diverge from the data. On our data sets, mean-prediction trajectories from the GPDM with an RBF or linear+RBF kernel for dynamics usually produce
sequences that roughly follow the training data (e.g., see the red curves in Figure 3). This
usually means producing closed limit cycles with walking data. We also found that mean-prediction motions are often very close to the mean obtained from the HMC sampler; by
Figure 2: Models learned from a walking sequence of 2.5 gait cycles. The latent positions
learned with a GPLVM (a) and a GPDM (b) are shown in blue. Vectors depict the temporal
sequence. (c) -log variance for reconstruction shows regions of latent space that are reconstructed with high confidence. (d) Random trajectories drawn from the model using HMC
(green), and their mean (red). (e) A GPDM of walk data learned with RBF+linear kernel
dynamics. The simulation (red) was started far from the training data, and then optimized
(green). The poses were reconstructed from points on the optimized trajectory.
Figure 3: (a) Two GPDMs and mean predictions. The first is that from the previous figure.
The second was learned with a linear kernel. (b) The GPDM model was learned from 3
swings of a golf club, using a 2nd order RBF kernel for dynamics. The two plots show 2D
orthogonal projections of the 3D latent space.
initializing HMC with mean-prediction, we find that the sampler reaches equilibrium in a
small number of iterations. Compared to the RBF kernels, mean-prediction motions generated from GPDMs with the linear kernel often deviate from the original data (e.g., see
Figure 3a), and lead to over-smoothed animation.
Figure 3(b) shows a 3D GPDM learned from three swings of a golf club. The learning
aligns the sequences and nicely accounts for variations in speed during the club trajectory.
3.2 Optimization
While mean-prediction is efficient, there is nothing in the algorithm that prevents trajectories from drifting away from the training data. Thus, it is sometimes desirable to optimize a particular motion under the GPDM, which often reduces drift of the mean-prediction motions.
Figure 4: GPDM from walk sequence with missing data learned with (a) a RBF+linear
kernel for dynamics, and (b) a linear kernel for dynamics. Blue curves depict original data.
Green curves are the reconstructed, missing data.
To optimize a new sequence, we first select a starting point x̂_1 and a number of time-steps. The likelihood p(X̂ | X, ᾱ) of the new sequence X̂ is then optimized directly (holding the previously learned latent positions, X, and hyperparameters, ᾱ, fixed). To see why optimization generates motion close to the training data, note
that the variance of pose x̂_{t+1} is determined by σ²_X(x̂_t), which will be lower when x̂_t is nearer the training data. Consequently, the likelihood of x̂_{t+1} can be increased by moving x̂_t closer to the training data. This generalizes the preference of the SGPLVM for poses similar to the examples [4], and is a natural consequence of the Bayesian approach. As an example, Fig. 2(e) shows an optimized walk sequence initialized from the mean-prediction.
3.3 Forecasting
We performed a simple experiment to compare the predictive power of the GPDM to a
linear dynamical system, implemented as a GPDM with linear kernel in the latent space and
RBF latent mapping. We trained each model on the first 130 frames of the 60Hz walking
sequence (corresponding to 2 cycles), and tested on the remaining 23 frames. From each
test frame mean-prediction was used to predict the pose 8 frames ahead, and then the RMS
pose error was computed against ground truth. The test was repeated using mean-prediction
and optimization for three kernels, all based on first-order predictions as in (1):
                   Linear   RBF     Linear+RBF
  mean-prediction  59.69    48.72   36.74
  optimization     58.32    45.89   31.97
Due to the nonlinear nature of the walking dynamics in latent space, the RBF and Linear+RBF kernels outperform the linear kernel. Moreover, optimization (initialized by
mean-prediction) improves the result in all cases, for reasons explained above.
3.4 Missing Data
The GPDM model can also handle incomplete data (a common problem with human motion
capture sequences). The GPDM is learned by minimizing L (Eqn. (16)), but with the terms
corresponding to missing poses yt removed. The latent coordinates for missing data are
initialized by cubic spline interpolation from the 3D PCA initialization of observations.
While this produces good results for short missing segments (e.g., 10-15 frames of the
157-frame walk sequence used in Fig. 2), it fails on long missing segments. The problem
lies with the difficulty in initializing the missing latent positions sufficiently close to the
training data. To solve the problem, we first learn a model with a subsampled data sequence.
Reducing sampling density effectively increases uncertainty in the reconstruction process
so that the probability density over the latent space falls off more smoothly from the data.
We then restart the learning with the entire data set, but with the kernel hyperparameters
fixed. In doing so, the dynamics terms in the objective function exert more influence over
the latent coordinates of the training data, and a smooth model is learned.
With 50 missing frames of the 157-frame walk sequence, this optimization produces mod-
els (Fig. 4) that are much smoother than those in Fig. 2. The linear kernel is able to pull
the latent coordinates onto a cylinder (Fig. 4b), and thereby provides an accurate dynamical model. Both models shown in Fig. 4 produce estimates of the missing poses that are
visually indistinguishable from the ground truth.
4 Discussion and Extensions
One of the main strengths of the GPDM model is the ability to generalize well from small
datasets. Conversely, performance is a major issue in applying GP methods to larger
datasets. Previous approaches prune uninformative vectors from the training data [5]. This
is not straightforward when learning a GPDM, however, because each timestep is highly
correlated with the steps before and after it. For example, if we hold x_t fixed during optimization, then it is unlikely that the optimizer will make much adjustment to x_{t+1} or x_{t-1}.
The use of higher-order features provides a possible solution to this problem. Specifically,
consider a dynamical model of the form v_t = f(x_{t-1}, v_{t-1}). Since adjacent time-steps are related only by the velocity v_t ≈ (x_t - x_{t-1})/Δt, we can handle irregularly-sampled datapoints by adjusting the timestep Δt, possibly using a different Δt at each step.
A number of further extensions to the GPDM model are possible. It would be straightforward to include a control signal u_t in the dynamics f(x_t, u_t). It would also be interesting to
explore uncertainty in latent variable estimation (e.g., see [3]). Our use of maximum likelihood latent coordinates is motivated by Lawrence's observation that model uncertainty
and latent coordinate uncertainty are interchangeable when learning PCA [5]. However, in
some applications, uncertainty about latent coordinates may be highly structured (e.g., due
to depth ambiguities in motion tracking).
Acknowledgements This work made use of Neil Lawrence's publicly-available GPLVM code, the CMU mocap database (mocap.cs.cmu.edu), and Joe Conti's volume visualization code from mathworks.com. This research was supported by NSERC and CIAR.
References
[1] M. Brand and A. Hertzmann. Style machines. Proc. SIGGRAPH, pp. 183-192, July 2000.
[2] Z. Ghahramani and S. T. Roweis. Learning nonlinear dynamical systems using an EM algorithm. Proc. NIPS 11, pp. 431-437, 1999.
[3] A. Girard, C. E. Rasmussen, J. G. Candela, and R. Murray-Smith. Gaussian process priors with
uncertain inputs - application to multiple-step ahead time series forecasting. Proc. NIPS 15, pp.
529-536, 2003.
[4] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović. Style-based inverse kinematics. ACM
Trans. Graphics, 23(3):522-531, Aug. 2004.
[5] N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional
data. Proc. NIPS 16, 2004.
[6] J. Lee, J. Chai, P. S. A. Reitsma, J. K. Hodgins, and N. S. Pollard. Interactive control of avatars
animated with human motion data. ACM Trans. Graphics, 21(3):491-500, July 2002.
[7] W. E. Leithead, E. Solak, and D. J. Leith. Direct identification of nonlinear structure using
Gaussian process prior models. Proc. European Control Conference, 2003.
[8] D. MacKay. Information Theory, Inference, and Learning Algorithms. 2003.
[9] R. Murray-Smith and B. A. Pearlmutter. Transformations of Gaussian process priors. Technical
Report, Department of Computer Science, Glasgow University, 2003
[10] R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, 1996.
[11] V. Pavlović, J. M. Rehg, and J. MacCormick. Learning switching linear models of human
motion. Proc. NIPS 13, pp. 981-987, 2001.
[12] L. Ralaivola and F. d'Alché-Buc. Dynamical modeling with kernels for nonlinear time series
prediction. Proc. NIPS 16, 2004.
[13] C. E. Rasmussen and M. Kuss. Gaussian processes in reinforcement learning. Proc. NIPS 16,
2004.
[14] H. Sidenbladh, M. J. Black, and D. J. Fleet. Stochastic tracking of 3D human figures using 2D
motion. Proc. ECCV, volume 2, pp. 702-718, 2000.
[15] E. Solak, R. Murray-Smith, W. Leithead, D. Leith, and C. E. Rasmussen. Derivative observations in Gaussian process models of dynamic systems. Proc. NIPS 15, pp. 1033-1040, 2003.
| 2783 |@word sgplvm:3 nd:1 nonsensical:1 simulation:1 xout:7 thereby:1 mention:1 tr:4 pavlovi:1 initial:2 configuration:1 series:8 animated:1 past:1 existing:1 current:1 wd:1 com:1 must:2 written:1 concatenate:1 additive:1 shape:2 remove:1 plot:1 depict:2 generative:1 accordingly:1 isotropic:3 smith:3 short:1 regressive:1 provides:2 toronto:5 club:3 preference:1 constructed:1 direct:1 inside:1 g4:1 x0:2 roughly:2 nor:1 inspired:2 estimating:1 moreover:2 suffice:1 grochow:1 transformation:1 impractical:1 temporal:2 y3:1 interactive:1 control:4 normally:1 yn:3 appear:1 producing:1 before:1 leithead:2 limit:1 consequence:2 switching:3 despite:2 leith:2 interpolation:1 might:1 black:1 exert:1 initialization:1 conversely:1 limited:1 range:1 practice:1 x3:1 projection:1 confidence:2 integrating:1 radial:1 cannot:2 marginalize:3 close:3 onto:1 ralaivola:1 influence:1 applying:1 www:1 equivalent:2 map:2 optimize:2 yt:4 missing:10 straightforward:2 starting:1 independently:1 arranges:1 glasgow:1 factored:1 pull:1 datapoints:1 rehg:1 handle:2 coordinate:15 variation:1 analogous:1 avatar:1 element:1 velocity:6 expensive:1 walking:7 database:1 observed:1 wang:1 capture:5 initializing:2 region:1 cycle:4 removed:1 hertzmann:3 traing:1 dynamic:35 trained:1 interchangeable:1 segment:2 predictive:1 basis:8 alch:1 joint:5 siggraph:1 represented:1 effective:1 monte:2 outside:1 larger:1 valued:1 solve:1 ability:1 neil:1 gp:9 jointly:1 final:1 online:1 sequence:30 reconstruction:3 lowdimensional:1 gait:1 roweis:1 ky:8 interations:1 webpage:1 chai:1 produce:5 generating:1 tions:1 illustrate:1 depending:1 pose:19 aug:1 implemented:1 c:2 stochastic:1 human:9 require:3 generalization:1 extension:2 hold:1 around:1 sufficiently:1 ground:2 exp:5 visually:1 equilibrium:1 mapping:17 bj:1 predict:2 mo:1 lawrence:3 major:1 optimizer:1 consecutive:1 a2:1 estimation:1 proc:10 currently:1 gaussian:23 rather:1 dgp:2 likelihood:7 contrast:2 inference:2 el:1 typically:2 entire:1 unlikely:1 kernelized:1 visualisation:1 comprising:1 overall:1 translational:1 issue:1 augment:1 retaining:1 integration:1 mackay:1 once:1 nicely:1 sampling:3 x4:1 represents:3 unsupervised:1 report:1 spline:1 few:2 individual:1 subsampled:1 cylinder:1 investigate:1 highly:2 introduces:1 chain:1 accurate:2 closer:1 orthogonal:1 incomplete:1 hertzman:1 walk:6 initialized:4 uncertain:1 increased:1 column:2 modeling:6 entry:1 euler:2 graphic:2 density:7 lee:1 off:1 diverge:1 together:3 ym:1 fused:1 intersecting:1 w1:1 central:1 tube:1 ambiguity:1 reconstructs:1 containing:1 possibly:1 derivative:1 leading:1 style:2 account:3 nonlinearities:1 b2:1 explicitly:1 performed:2 closed:5 candela:1 doing:1 red:3 wm:1 rbfs:1 minimize:1 formed:1 publicly:1 variance:7 efficiently:1 yield:1 conceptually:3 generalize:1 bayesian:6 identification:1 produced:1 carlo:2 trajectory:9 m5s:1 kuss:1 reach:1 aligns:1 against:1 pp:6 naturally:1 associated:3 propagated:1 sampled:1 dataset:1 adjusting:1 ut:2 dimensionality:2 torso:2 improves:1 subtle:1 higher:4 popovi:1 supervised:1 follow:1 specify:1 done:1 eqn:6 nonlinear:20 continuity:1 omitting:2 effect:1 y2:2 swing:3 hence:1 neal:1 white:1 adjacent:1 indistinguishable:1 during:2 width:2 noted:2 demonstrate:1 pearlmutter:1 motion:20 jack:1 common:1 volume:3 numerically:1 measurement:2 ai:1 rd:2 similarly:3 inclusion:1 had:1 moving:1 entail:3 add:1 posterior:1 perspective:1 optimizing:1 driven:2 apart:1 verlag:1 vt:5 approximators:1 prune:1 mocap:2 signal:1 july:2 smoother:2 multiple:4 full:1 desirable:2 infer:1 reduces:1 
smooth:3 technical:1 long:2 a1:1 ensuring:1 prediction:23 regression:5 basic:1 cmu:2 kernel:32 represent:1 sometimes:1 uninformative:1 hz:1 mod:1 enough:1 variety:1 xj:1 fit:1 reduce:1 simplifies:1 golf:3 ciar:1 fleet:3 whether:1 motivated:3 expression:2 pca:3 rms:1 forecasting:2 pollard:1 cause:1 useful:1 yw:2 amount:1 nonparametric:1 extensively:1 http:1 generate:1 outperform:1 blue:2 discrete:1 hyperparameter:1 drawn:1 timestep:3 sum:1 inverse:4 parameterized:2 uncertainty:6 angle:2 place:1 extends:1 draw:1 scaling:1 quadratic:1 activity:1 strength:1 occur:1 ahead:2 precisely:1 constrain:1 x2:4 generates:2 gpdm:33 speed:1 relatively:3 martin:1 department:2 structured:1 combination:1 smaller:1 remain:1 em:1 explained:1 ln:7 visualization:2 previously:1 kinematics:1 mathworks:1 irregularly:1 generalizes:1 available:1 away:1 appropriate:3 subtracted:1 drifting:1 substitute:1 original:2 denotes:1 assumes:1 ensure:1 remaining:1 include:1 graphical:2 marginalized:2 instant:1 ghahramani:1 build:1 murray:3 objective:1 added:1 dependence:1 subspace:1 sidenbladh:1 maccormick:1 concatenation:1 restart:1 nx:5 reason:1 assuming:1 length:1 code:2 index:1 y4:1 minimizing:3 difficult:2 hmc:3 holding:1 negative:1 incidental:1 observation:9 datasets:3 markov:4 gplvm:5 defining:1 extended:1 y1:6 frame:11 smoothed:1 expressiveness:1 drift:1 david:1 pair:2 extensive:1 redefining:1 optimized:4 learned:14 nearer:1 nip:7 trans:2 able:1 dynamical:22 usually:2 xm:1 including:1 green:3 video:1 power:1 difficulty:2 natural:2 hybrid:1 representing:1 numerous:1 started:1 auto:1 deviate:1 prior:11 literature:1 acknowledgement:1 xtt:2 determining:2 marginalizing:2 relative:1 fully:1 interesting:1 sufficient:1 eccv:1 supported:1 rasmussen:3 allow:1 fall:1 curve:3 dimension:3 xn:7 transition:3 avoids:1 depth:1 autoregressive:2 made:1 jump:1 reinforcement:1 far:2 reconstructed:3 ignore:1 buc:1 unreliable:1 global:2 overfitting:2 b1:1 assumed:1 xi:3 conti:1 latent:53 why:1 learn:4 nature:1 solak:2 complex:1 discourage:1 european:1 diag:1 da:3 hodgins:1 main:2 noise:5 animation:4 hyperparameters:7 nothing:1 fair:3 repeated:1 girard:1 x1:9 fig:12 depicts:1 cubic:1 ny:3 fails:1 theme:1 comprises:3 inferring:1 position:9 lie:1 weighting:1 learns:1 specific:2 xt:34 intrinsic:1 incorporating:1 joe:1 effectively:1 conditioned:2 kx:14 smoothly:2 simply:1 saddle:1 likely:1 explore:1 prevents:1 adjustment:1 nserc:1 tracking:5 applies:2 springer:1 corresponds:2 truth:2 acm:2 goal:1 formulated:1 acceleration:1 rbf:16 consequently:1 shared:1 determined:1 specifically:1 reducing:1 averaging:2 sampler:2 called:1 brand:1 aaron:1 select:3 people:3 tested:1 correlated:2 |
1,963 | 2,784 | Factorial Switching Kalman Filters for
Condition Monitoring in Neonatal Intensive
Care
Christopher K. I. Williams and John Quinn
School of Informatics, University of Edinburgh
Edinburgh EH1 2QL, UK
[email protected]
[email protected]
Neil McIntosh
Simpson Centre for Reproductive
Health, Edinburgh EH16 4SB, UK
[email protected]
Abstract
The observed physiological dynamics of an infant receiving intensive
care are affected by many possible factors, including interventions to the
baby, the operation of the monitoring equipment and the state of health.
The Factorial Switching Kalman Filter can be used to infer the presence of such factors from a sequence of observations, and to estimate the
true values where these observations have been corrupted. We apply this
model to clinical time series data and show it to be effective in identifying
a number of artifactual and physiological patterns.
1 Introduction
In a neonatal intensive care unit (NICU), an infant?s vital signs, including heart rate, blood
pressures, blood gas properties and temperatures, are continuously monitored and displayed
at the cotside. The levels of these measurements and the way they vary give an indication of
the baby?s health, but they can be affected by many different things. The potential factors
include handling of the baby, different cardiovascular and respiratory conditions, the effects
of drugs which have been administered, and the setup of the monitoring equipment. Each
factor has an effect on the dynamics of the observations, some by affecting the physiology
of the baby (such as an oxygen desaturation), and some by overwriting the measurements
with artifactual values (such as a probe dropout).
We use a Factorial Switching Kalman Filter (FSKF) to model such data. This consists of
three sets of variables which we call factors, state and observations, as indicated in Figure
1(a). There are a number of hidden factors; these are discrete variables, modelling for
example if the baby is in a normal respiratory state or not, or if a probe is disconnected or
not. The state of baby denotes continuous-valued quantities; this models the true values of
infant?s physiological variables, but also has dimensions to model certain artifact processes
(see below). The observations are those readings obtained from the monitoring equipment,
and are subject to corruption by artifact etc.
By describing the dynamical regime associated with each combination of factors as a linear
Gaussian model, we obtain an FSKF, which extends the Switching Kalman Filter (see e.g.
[10, 3]) to incorporate multiple independent factors. With this method we can infer the
value of each factor and estimate the true values of vital signs during the times that the
measurements are obscured by artifact. By using an interpretable hidden state structure for
this application, domain knowledge can be used to set some of the parameters.
This paper demonstrates an application of the FSKF to NICU monitoring data. In Section
2 we introduce the model, and discuss the links to previous work in the field. In Section 3
we describe an approach for setting the parameters of the model and in Section 4 we show
results from the model when applied to NICU data. Finally we close with a discussion in
Section 5.
2 Model description
The Factorial Switching Kalman Filter is shown in Figure 1(a). In this model, M factors f_t^{(1)}, ..., f_t^{(M)} affect the hidden continuous state x_t and the observations y_t. The factor f^{(m)} can take on K^{(m)} different values. For example, a simple factor is "ECG probe dropout", taking on two possible values, "dropped out" or "normal". As factors in this application can affect the observations either by altering the baby's physiology or overwriting them with artifactual values, the hidden state vector x_t contains information on both the "true" physiological condition of the baby and on the levels of any artifactual processes.
The dynamical regime at time t is controlled by the "switch" variable s_t, which is the cross product of the individual factors,

s_t = f_t^{(1)} \otimes \cdots \otimes f_t^{(M)}.   (1)
For a given setting of s_t, the hidden continuous state and the observations are related by:

x_t \sim N(A(s_t) x_{t-1} + d(s_t), Q(s_t)),
y_t \sim N(H(s_t) x_t, R(s_t)),   (2)
where as in the SKF the system dynamics and observation process are dependent on the switch variable. Here A(s_t) is a square system matrix, d(s_t) is a drift vector, H(s_t) is the state-observations matrix, and Q(s_t) and R(s_t) are noise covariance matrices. The factors
are taken to be a priori independent and first-order Markovian, so that
p(s_t \mid s_{t-1}) = \prod_{m=1}^{M} p\left(f_t^{(m)} \mid f_{t-1}^{(m)}\right).   (3)
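To illustrate Eqns. (1) and (3), the following Python/NumPy sketch (ours, not the authors' code; names are assumptions) builds the K × K transition matrix over cross-product switch states from M per-factor transition matrices:

    import itertools
    import numpy as np

    def switch_transition(Ts):
        # Ts[m][b, a] = P(f_t^(m) = a | f_{t-1}^(m) = b).
        # Returns the switch-state list and P with
        # P[i, j] = p(s_t = states[j] | s_{t-1} = states[i]),
        # the product over factors of Eqn. (3).
        states = list(itertools.product(*[range(T.shape[0]) for T in Ts]))
        K = len(states)
        P = np.ones((K, K))
        for i, s_prev in enumerate(states):
            for j, s_next in enumerate(states):
                for m, T in enumerate(Ts):
                    P[i, j] *= T[s_prev[m], s_next[m]]
        return states, P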
2.1 Application-specific setup
The continuous hidden state vector x contains two types of values, the true physiological values, x_p, and those of artifactual processes, x_a. The true values are modelled as independent autoregressive processes, described in more detail in section 3. To represent this as a state space, the vector x_t has to contain the value of the current state and store the value of the states at previous times.
Note that artifact state values can be affected by physiological state, but not the other way
round. For example, one factor we model is the arterial blood sample, seen in Figure 1(b),
lower panel. This occurs when a three-way valve is closed in the baby's arterial line, in
order for a clinician to draw blood for a sample. While the valve is closed a pump works
against the pressure sensor, causing the systolic and diastolic blood pressure measurements
to rise artificially. The artifactual values in this case always start at around the value of the
baby's diastolic blood pressure.
The factors modelled in these experiments are listed in Table 1. The dropout factors represent the case where probes are disconnected and measurements fall to zero on the channels
supplied by that probe. In this case, the true physiological values are completely hidden.
[Figure 1 graphics: (a) graphical model with nodes "Factor 1 (artifactual)", "Factor 2 (physiological)", "True state", "Artifactual state" and "Observations"; (b) traces of HR and Sys. BP over roughly 1200 seconds, with inferred "Blood sample" and "ECG dropout" indicators below.]
Figure 1: (a) shows a graphical representation of a Factorial Switching Kalman Filter, with
M = 2 factors. Squares are discrete values, circles are continuous and shaded nodes
are observed. Panel (b) shows ECG dropout and arterial blood sample events occurring
simultaneously. HR denotes heart rate, Sys. BP denotes the systolic blood pressure, and
times are in seconds. The dashed line indicates the estimate of true values and the greyscale
denotes two standard deviation error bars. We see uncertainty increasing while observations
are artifactual. The traces at the bottom show the inferred duration of the arterial blood
sample and ECG dropout events.
The transcutaneous probe (TCP) provides measurements of the partial pressure of oxygen
(TcPO2) and carbon dioxide (TcPCO2) in the baby's blood, and is recalibrated every few
hours. This process has three stages: firstly calibration, where TcPO2 and TcPCO2 are set
to known values by applying a gas to the probe, secondly a stage where the probe is in
air and TcPCO2 drops to zero, and finally an equilibration phase where both values slowly
return to the physiological baseline when the probe is replaced.
As explained above, when an arterial blood sample is being taken one sees a characteristic
ramp in the blood pressure measurements. Temperature probe disconnection frequently
occurs in conjunction with handling. The core temperature probe is under the baby and can
come off when the baby is turned over for an examination, causing the readings to drop to
the ambient temperature level of the incubator over the course of a few minutes. When the
probe is reapplied, the measurements gradually return to the true level of the baby's core
temperature.
Bradycardia is a genuine physiological occurrence where the heart rate temporarily drops,
often with a characteristic curve, then a systemic reaction brings the measurements back
to the baseline. The final factor models opening of the portals on the baby's incubator.
Because the environment within the incubator is closely regulated, an intervention can be
inferred from a fall in the incubator humidity measurements. While the portals are open
and a clinician is handling the baby, we expect increased variability in the measurements
from the probes that are still attached.
2.2 Inference
For the application of real time clinical monitoring, we are interested in filtering, inferring x_t and s_t from the observations y_{1:t}. However, the time taken for exact inference of the posterior p(x_t, s_t | y_{1:t}) scales exponentially with t, making it intractable. This is because the probabilities of having moved between every possible combination of switch settings in times t-1 and t are needed to calculate the posterior at time t. Hence the number of
FACTOR                                   | POSSIBLE SETTINGS
5 probe dropout factors (pulse oximeter, | 1. Dropped out  2. Normal
ECG, arterial line, temperature probe,   |
transcutaneous probe)                    |
TCP recalibration                        | 1. O2 high, CO2 low  2. CO2 ≈ 0  3. Equilibration  4. Normal
Arterial blood sample                    | 1. Blood sample  2. Normal
Temperature probe disconnection          | 1. Temperature probe disconnection  2. Reconnection  3. Normal
Bradycardia                              | 1. Bradycardia onset  2. HR restabilisation  3. Normal
Incubator open                           | 1. Incubator portals opened  2. Normal

Table 1: Description of factors.
Gaussians needed to represent the posterior exactly at each time step increases by a factor of K, the number of cross-product switch settings, where K = \prod_{m=1}^{M} K^{(m)}.
In this experiment we use the Gaussian Sum approximation [1]. At each time step we maintain an approximation of p(x_t | s_t, y_{1:t}) as a mixture of K Gaussians. Calculating the Kalman updates and likelihoods for every possible setting of s_{t+1} will result in the posterior p(x_{t+1} | s_{t+1}, y_{1:t+1}) having K² mixture components, which can be collapsed back into K components by matching means and variances of the distribution, as described in [6].
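The collapse step can be pictured as standard moment-matching of a Gaussian mixture; a minimal sketch of ours, assuming NumPy arrays of weights, means of shape (K, d) and covariances of shape (K, d, d):

    def collapse(weights, means, covs):
        # Match the first two moments of a Gaussian mixture with one Gaussian:
        # m = sum_k w_k m_k,  C = sum_k w_k (C_k + (m_k - m)(m_k - m)^T)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        m = np.einsum('k,kd->d', w, means)
        diffs = means - m[None, :]
        C = np.einsum('k,kij->ij', w, covs) + np.einsum('k,ki,kj->ij', w, diffs, diffs)
        return m, C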
For comparison we also use Rao-Blackwellised particle filtering (RBPF) [7] for approximate inference. In this technique a number of particles are propagated through each time step, each with a switch state s_t and an estimate of the mean and variance of x_t. A value for the switch state s_{t+1} is obtained for each particle by sampling from the transition probabilities, after which Kalman updates are performed and a likelihood value can be calculated. Based on this likelihood, particles can be either discarded or multiplied. Because Kalman updates are not calculated for every possible setting of s_{t+1}, this method can give a significant increase in speed when there are many factors, with some tradeoff in accuracy.
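One RBPF step can be sketched schematically as below. This is our own illustration under simplifying assumptions: trans[s] is a categorical distribution over next switch states, params[s] = (A, d, Q, H, R) for switch state s, and resampling is done at every step.

    def kalman_step(m, P, y, A, d, Q, H, R):
        # Standard Kalman predict + update; also returns the observation log-likelihood.
        m_pred = A @ m + d
        P_pred = A @ P @ A.T + Q
        S = H @ P_pred @ H.T + R
        resid = y - H @ m_pred
        G = P_pred @ H.T @ np.linalg.inv(S)
        m_new = m_pred + G @ resid
        P_new = (np.eye(len(m)) - G @ H) @ P_pred
        _, logdetS = np.linalg.slogdet(S)
        ll = -0.5 * (resid @ np.linalg.solve(S, resid)
                     + logdetS + len(y) * np.log(2.0 * np.pi))
        return m_new, P_new, ll

    def rbpf_step(particles, y, trans, params, rng):
        # particles: list of (s, m, P) triples.
        new, logw = [], []
        for s, m, P in particles:
            s2 = rng.choice(len(trans[s]), p=trans[s])      # sample switch state
            m2, P2, ll = kalman_step(m, P, y, *params[s2])  # Rao-Blackwellised update
            new.append((s2, m2, P2))
            logw.append(ll)
        logw = np.array(logw)
        w = np.exp(logw - logw.max())
        w = w / w.sum()
        idx = rng.choice(len(new), size=len(new), p=w)      # resample by likelihood
        return [new[i] for i in idx]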
Both inference methods can be speeded up by considering the dropout factors. Because a
probe dropout always results in an observation of zero on the corresponding measurement
channels, the value of y_t can be examined at each step. If it is not equal to zero then
we know that the likelihood of a dropout factor being active will be very low, so there is
no need to calculate it explicitly. Similarly, if any of the observations are zero then we
only perform Kalman updates and calculate likelihoods for those switch states with the
appropriate dropout setting.
2.3 Relation to previous work
The SKF and various approximations for inference have been described by many authors,
see e.g. [10, 3]. In [5], the authors used a 2-factor FSKF in a speech recognition application;
the two factors corresponded to (i) phones and (ii) the phone-to-spectrum transformation.
There has also been much prior work on condition monitoring in intensive care; here we
give a brief review of some of these studies and the relationship to our own work.
The specific problem of artifact detection in physiological time series data has been approached in a number of ways. For example Tsien [9] used machine learning techniques,
notably decision trees and logistic regression, to classify each observation yt as genuine
or artifactual. Hoare and Beatty [4] describe the use of time series analysis techniques
(ARIMA models, moving average and Kalman filters) to predict the next point in a patient?s monitoring trace. If the difference between the observed value and the predicted
value was outside a predetermined range, the data point was classified as artifactual. Our
application of a model with factorial state extends this work by explaining the specific
cause of an artifact, rather than just the fact that a certain data point is artifactual or not.
We are not aware of other work in condition monitoring using a FSKF.
3 Parameter estimation
We use hand-annotated training data from a number of babies to estimate the parameters of
the model.
Factor dynamics: Using equation 3 we can calculate the state transition probabilities from the transition probabilities for individual state variables, P(f_t^{(m)} = a | f_{t-1}^{(m)} = b). The estimates for these are given by

P\left(f_t^{(m)} = a \mid f_{t-1}^{(m)} = b\right) = (n_{ba} + c) \Big/ \sum_{c'=1}^{K^{(m)}} (n_{bc'} + c),

where n_{ba} is the number of transitions from state b to state a in the training data. The smoothing constant c (in our experiments we set c = 1) is added to stop any of the transition probabilities being zero or very small. While a zero probability could be useful for a sequence of states that we know are impossible, in general we want to avoid it. This solution can be justified theoretically as a maximum a posteriori estimate where the prior is given by a Dirichlet distribution. The factor dynamics can be used to create left-to-right models, e.g. for passing through the sequence O2 high, CO2 low; CO2 ≈ 0; equilibration in the TCP recalibration case.
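The smoothed estimate above amounts to add-c counting; a hypothetical one-factor implementation (ours, with assumed names) is:

    def transition_matrix(seq, K, c=1.0):
        # MAP estimate under a Dirichlet prior:
        # P[b, a] = (n_ba + c) / sum_a' (n_ba' + c),
        # with n_ba the number of b -> a transitions observed in `seq`.
        n = np.zeros((K, K))
        for b, a in zip(seq[:-1], seq[1:]):
            n[b, a] += 1.0
        return (n + c) / (n + c).sum(axis=1, keepdims=True)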
System dynamics: When no factor is active (i.e. non-normal), the baby is said to be in a
stable condition and has some capacity for self-regulation. In this condition we consider
each observation channel separately, and use standard methods to fit AR or ARIMA models
to each channel. Most channels vary around reference ranges when the baby is stable
and are well fitted by AR(2) models. Heart rate and blood pressure observation channels
are more volatile and stationarity is improved after differencing. Heart rate dynamics,
for example, are well fitted with an ARIMA(2,1,0) process. Representing trained AR or
ARIMA processes in state space form is then straightforward.
The observational data tends to have some high frequency noise on it (see e.g. Fig. 1(b),
lower panel) due to probe error and quantization effects. Thus we smooth sections of
stable data with a 21-point moving average in order to obtain training data for the system
dynamics.
The Yule-Walker equations are then used to set parameters for this moving-averaged data.
The fit can be verified for each observation channel by comparing the spectrum of new
data with the theoretical spectrum of the AR process (or the spectrum of the differenced
data for ARIMA processes), see e.g. [2]. The measurement noise matrix R is estimated by
calculating the variance of the differences between the original and averaged training data
for each measurement channel.
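For a single channel, the Yule-Walker fit of an AR(2) model can be sketched as follows (our illustrative code; standard time-series toolboxes provide equivalents):

    def fit_ar2_yule_walker(x):
        # Solve [g0 g1; g1 g0] [a1; a2] = [g1; g2] for
        # x_t = a1 x_{t-1} + a2 x_{t-2} + e_t, where g_k are sample
        # autocovariances; also return the innovation variance.
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        g = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)])
        R = np.array([[g[0], g[1]], [g[1], g[0]]])
        a = np.linalg.solve(R, g[1:])
        noise_var = g[0] - a @ g[1:]
        return a, noise_var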
Above we have modelled the dynamics for a baby in the stable condition; we now describe
some of the system models used when the factors are active (i.e. non-normal). The drop
and rise in temperature measurements caused by a temperature probe disconnection closely
resemble exponential decay and can therefore be fitted with an AR(1) process. This also
applies to the equilibration stage of a TCP recalibration.
The dynamics corresponding to the bradycardia factor are set by finding the mean slope
of the fall and rise in heart rate, which is used for the drift term d, then fitting an AR(1)
process to the residuals. The arterial blood sample dynamics are modelled with linear drift;
note that the variable in x_a corresponding to the value of the arterial blood sample is tied to the diastolic blood pressure value while the factor is inactive. We also use linear drift to model the drop in incubator humidity measurements corresponding to a clinician opening the incubator portals.

        Blood sample    TCP recal.     Bradycardia    TC disconnect   Incu. open
        AUC    EER      AUC    EER     AUC    EER     AUC    EER      AUC    EER
FHMM    0.97   0.02     0.78   0.25    0.67   0.42    0.75   0.35     0.97   0.07
GS      0.99   0.01     0.91   0.12    0.72   0.39    0.88   0.19     0.97   0.06
RBPF    0.62   0.46     0.90   0.14    0.76   0.37    0.85   0.32     0.95   0.08

Table 2: Inference results on evaluation data. FHMM denotes the Factorial Hidden Markov Model, GS denotes the Gaussian Sum approximation, and RBPF denotes Rao-Blackwellised particle filtering with 560 particles. AUC denotes area under ROC curve and EER denotes the equal error rate.
We assume that the measurement noise from each probe is the same for physiological and
artifactual readings, for example if the core temperature probe is attached to the baby's skin
or is reading ambient incubator temperature.
Combining factors: The parameters {A, H, Q, R, d} have to be supplied for every combination of factors. It might be thought that training data would be needed for each of these
possible combinations, but in practice parameters can be trained for factors individually
and then combined, as we know that some of the phenomena we want to model only affect
a subset of the channels, or override other phenomena [8]. This process of setting parameters for each combination of factor settings can be automated. The factors are arranged in
a partially ordered set, where later factors overwrite the dynamics A, Q, d or observations
H, R on at least one channel of their predecessor. For example, the "bradycardia" factor overwrites the heart rate dynamics of the normal state, while the "ECG dropout" factor overwrites the heart rate observations; if both these things are happening simultaneously then
we expect the same observations as if there was only an ECG dropout, but the dynamics
of the true state xp are propagated as though there was only a bradycardia. Having found
this ordering it is straightforward to merge the trained parameters for every combination of
factors.
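The automated combination can be pictured as a simple ordered merge of per-channel parameter blocks; the sketch below is ours and purely illustrative (a real implementation would merge each of {A, H, Q, R, d} per channel):

    def merge_factor_params(base, overrides, order):
        # base: channel -> parameter block for the stable ("all normal") model.
        # overrides[f]: channel -> block that factor f overwrites when active.
        # order: active factor names sorted so later factors overwrite earlier ones.
        merged = dict(base)
        for f in order:
            merged.update(overrides.get(f, {}))
        return merged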
4 Results
Monitoring data was obtained from eight infants of 28 weeks gestation during their first
week of life, from the NICU at Edinburgh Royal Infirmary. The data for each infant was
collected every second for 24 hours, on nine channels: heart rate, systolic and diastolic
blood pressures, TcPO2 , TcPCO2 , O2 saturation, core temperature and incubator temperature and humidity. These infants were the first 8 in the NICU database who satisfied the
age criteria and were monitored on all 8 channels for some 24 hour period within their first
week. Four infants were used for training the model and four for evaluation. The test data
was annotated with the times of occurrences of the factors in Table 1 by a clinical expert
and one of the authors.
Some examples of inference under the model are shown in Figures 1(b) and 2. In Figure
1(b) two factors, arterial blood sample and ECG dropout are simultaneously active, and
the inference works nicely in this case, with growing uncertainty about the true value of
the heart-rate and blood pressure channels when artifactual readings are observed. The
upper panel in figure 2(a) shows two examples of bradycardia being detected. In the lower
panel, the model correctly infers the times that a clinician enters the incubator and replaces
a disconnected core temperature probe. Figure 2(b) illustrates the simultaneous detection
of a TCP artifact (the TCP recal state plotted is obtained by summing the probabilities of
the three non-normal TCP states) and a blood sample spike.

[Figure 2 graphics: time traces of HR, Sys. BP, Dia. BP, Core temp., TcPCO2, TcPO2 and Incu humidity, with inferred indicators for Bradycardia, Incu open, TC probe off, TCP recal and Blood sample; panels (a) and (b).]

Figure 2: Inferred durations of physiological and artifactual states: (a) shows two episodes of bradycardia (top), and a clinician entering the incubator and replacing the core temperature probe (bottom). Plot (b) shows the inference of two simultaneous artifact processes, arterial blood sampling and TCP recalibration. Times are in seconds.
In Table 2 we show the performance of the model on the test data. The inferred probabilities
for each factor were compared with the gold standard which has a binary value for each
factor setting at each time point. Inference was done using the Gaussian sum approximation
and RBPF, where the number of particles was set so that the two inference methods had
the same execution time. As a baseline we also used a Factorial Hidden Markov Model
(FHMM) to infer when each factor was active. This model has the same factor structure as
the FSKF, without any hidden continuous state. The FHMM parameters were set using the
same training data as the FSKF.
It can be seen that the FSKF generalised well to the data from the test set. Inferences using
the Gaussian Sum approximation had consistently higher area under the ROC curve and
lower equal error rates than the FHMM. In particular, the inferred times of blood samples and incubator opening were reliably detected. The lower performance of the FHMM,
which has no knowledge of the dynamics, illustrates the difficulty caused by baseline physiological levels changing over time and between babies.
Inference results using Rao-Blackwellised particle filtering were less consistent. For blood
sampling and opening of the incubator the performance was worse than the baseline model,
though in detecting bradycardia the performance was marginally higher than for inferences
made using either the FHMM or the Gaussian Sum approximation.
Execution times for inference on 24 hours of monitoring data with the set of factors listed
in Table 1 on a 3.2GHz processor were approximately 7 hours 10 minutes for the FSKF
inference, and 100 seconds for the FHMM.
5 Discussion
In this paper we have shown that the FSKF model can be applied successfully to complex
monitoring data from a neonatal intensive care unit.
There are a number of directions in which this work can be extended. Firstly, for simplicity
we have used univariate autoregressive models for each component of the observations;
it would be interesting to fit a multivariate model to this data instead, as we expect that
there will be correlations between the channels. Also, there are additional factors that can
be incorporated into the model, for example to model a pneumothorax event, where air
becomes trapped inside the chest between the chest wall and the lung, causing the lung to
collapse. Fortunately this event is relatively rare so it was not seen in the data we have
analyzed in this experiment.
Acknowledgements
We thank Birgit Wefers for providing expert annotation of the evaluation data set, and
the anonymous referees for their comments which helped improve the paper. This work
was funded in part by a grant from the premature baby charity BLISS. The work was also
supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors?
views.
References
[1] D. L. Alspach and H. W. Sorenson. Nonlinear Bayesian Estimation Using Gaussian Sum Approximations. IEEE Transactions on Automatic Control, 17(4):439–448, 1972.
[2] C. Chatfield. The Analysis of Time Series: An Introduction. Chapman and Hall,
London, 4th edition, 1989.
[3] Z. Ghahramani and G. E. Hinton. Variational Learning for Switching State-Space
Models. Neural Computation, 12(4):963–996, 1998.
[4] S.W. Hoare and P.C.W. Beatty. Automatic artifact identification in anaesthesia patient record keeping: a comparison of techniques. Medical Engineering and Physics,
22:547–553, 2000.
[5] J. Ma and L. Deng. A mixed level switching dynamic system for continuous speech
recognition. Computer Speech and Language, 18:49–65, 2004.
[6] K. Murphy. Switching Kalman filters. Technical report, U.C. Berkeley, 1998.
[7] K. Murphy and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian
networks. In A. Doucet, N. de Freitas, and N. Gordon, editors, Sequential Monte
Carlo in Practice. Springer-Verlag, 2001.
[8] A. Spengler. Neonatal baby monitoring. Master's thesis, School of Informatics, University of Edinburgh, 2003.
[9] C. Tsien. TrendFinder: Automated Detection of Alarmable Trends. PhD thesis, MIT,
2000.
[10] M. West and P. J. Harrison. Bayesian Forecasting and Dynamic Models. Springer-Verlag, 1997. Second edition.
| 2784 |@word humidity:4 open:4 pulse:1 covariance:1 pressure:11 series:4 contains:2 bc:1 o2:3 reaction:1 freitas:1 current:1 comparing:1 john:2 predetermined:1 drop:5 interpretable:1 update:4 plot:1 infant:7 sys:3 tcp:10 core:7 record:1 provides:1 detecting:1 node:1 firstly:2 predecessor:1 consists:1 fitting:1 inside:1 introduce:1 excellence:1 theoretically:1 notably:1 frequently:1 growing:1 valve:2 hoare:2 considering:1 increasing:1 becomes:1 panel:5 finding:1 transformation:1 blackwellised:3 every:7 berkeley:1 exactly:1 demonstrates:1 qm:1 uk:5 birgit:1 unit:2 grant:1 intervention:2 control:1 medical:1 cardiovascular:1 generalised:1 dropped:2 engineering:1 tends:1 switching:9 merge:1 approximately:1 might:1 examined:1 ecg:8 shaded:1 collapse:1 speeded:1 diastolic:4 systemic:1 range:2 averaged:2 practice:2 area:2 drug:1 physiology:2 thought:1 matching:1 eer:6 close:1 collapsed:1 applying:1 impossible:1 yt:4 williams:2 arterial:11 straightforward:2 duration:2 simplicity:1 identifying:1 equilibration:4 overwrite:1 exact:1 referee:1 trend:1 recognition:2 timates:1 database:1 observed:4 ft:10 bottom:2 enters:1 calculate:4 episode:1 ordering:1 russell:1 environment:1 co2:4 dynamic:18 trained:3 completely:1 various:1 effective:1 describe:3 london:1 monte:1 detected:2 approached:1 corresponded:1 outside:1 valued:1 ramp:1 neil:2 final:1 sequence:3 indication:1 product:2 causing:3 turned:1 combining:1 gold:1 description:2 moved:1 ac:3 school:2 predicted:1 resemble:1 come:1 direction:1 closely:2 annotated:2 filter:8 opened:1 observational:1 wall:1 anonymous:1 secondly:1 around:2 hall:1 normal:12 predict:1 week:3 anaesthesia:1 vary:2 estimation:2 individually:1 create:1 successfully:1 reflects:1 mit:1 raoblackwellised:1 sensor:1 overwriting:2 gaussian:7 always:2 rather:1 avoid:1 publication:1 conjunction:1 consistently:1 modelling:1 indicates:1 likelihood:5 equipment:3 baseline:5 posteriori:1 inference:16 dependent:1 sb:1 hidden:10 relation:1 interested:1 pascal:1 priori:1 smoothing:1 field:1 genuine:2 equal:3 having:3 aware:1 sampling:3 nicely:1 chapman:1 report:1 gordon:1 few:2 opening:4 simultaneously:3 individual:2 murphy:2 replaced:1 phase:1 maintain:1 detection:3 stationarity:1 simpson:1 evaluation:3 mixture:2 analyzed:1 ambient:2 partial:1 sorenson:1 skf:2 tree:1 gestation:1 circle:1 plotted:1 obscured:1 theoretical:1 fitted:3 increased:1 classify:1 markovian:1 rao:3 ar:6 altering:1 deviation:1 subset:1 pump:1 rare:1 corrupted:1 combined:1 st:23 off:2 informatics:2 receiving:1 physic:1 continuously:1 thesis:2 satisfied:1 slowly:1 worse:1 expert:2 return:2 potential:1 de:1 disconnect:1 bliss:1 explicitly:1 caused:2 onset:1 performed:1 later:1 helped:1 closed:2 view:1 start:1 lung:2 annotation:1 slope:1 square:2 air:2 accuracy:1 variance:3 characteristic:2 who:1 fhmm:8 modelled:4 bayesian:3 identification:1 marginally:1 carlo:1 monitoring:13 corruption:1 processor:1 classified:1 simultaneous:2 ed:3 against:1 recalibration:4 frequency:1 associated:1 monitored:2 propagated:2 stop:1 knowledge:2 reapplied:1 infers:1 back:2 higher:2 improved:1 arranged:1 done:1 though:2 xa:2 stage:3 just:1 correlation:1 hand:1 christopher:1 replacing:1 nonlinear:1 logistic:1 brings:1 artifact:9 indicated:1 effect:3 contain:1 true:12 hence:1 entering:1 round:1 during:2 self:1 auc:6 criterion:1 override:1 temperature:16 oxygen:2 variational:1 volatile:1 attached:2 exponentially:1 measurement:17 significant:1 mcintosh:2 automatic:2 similarly:1 particle:9 centre:1 language:1 had:2 funded:1 moving:3 calibration:1 
stable:4 etc:1 posterior:4 own:1 multivariate:1 phone:2 store:1 certain:2 verlag:1 binary:1 life:1 baby:24 seen:3 additional:1 care:5 fortunately:1 deng:1 period:1 dashed:1 ii:1 multiple:1 infer:3 smooth:1 technical:1 clinical:3 cross:2 controlled:1 regression:1 patient:2 represent:3 justified:1 affecting:1 want:2 separately:1 harrison:1 walker:1 comment:1 subject:1 thing:2 call:1 chest:2 presence:1 vital:2 automated:2 switch:7 affect:3 fit:3 tradeoff:1 intensive:5 administered:1 inactive:1 chatfield:1 forecasting:1 speech:3 passing:1 cause:1 nine:1 useful:1 listed:2 factorial:8 supplied:2 sign:2 estimated:1 trapped:1 correctly:1 discrete:2 arima:5 affected:3 ist:2 four:2 blood:28 changing:1 verified:1 sum:6 uncertainty:2 master:1 extends:2 draw:1 decision:1 nba:2 dropout:14 replaces:1 g:2 bp:4 speed:1 relatively:1 combination:6 disconnected:3 temp:1 making:1 explained:1 gradually:1 heart:10 taken:3 dioxide:1 equation:2 describing:1 discus:1 needed:3 know:3 overwrites:2 dia:1 operation:1 gaussians:2 multiplied:1 apply:1 probe:26 eight:1 quinn:2 appropriate:1 occurrence:2 original:1 denotes:9 dirichlet:1 include:1 top:1 graphical:1 calculating:2 neonatal:4 ghahramani:1 skin:1 eh1:1 quantity:1 occurs:2 added:1 spike:1 said:1 regulated:1 link:1 thank:1 charity:1 capacity:1 collected:1 systolic:3 kalman:12 relationship:1 providing:1 differencing:1 ql:1 setup:2 regulation:1 carbon:1 greyscale:1 trace:2 rise:3 reliably:1 perform:1 upper:1 observation:23 markov:2 discarded:1 gas:2 displayed:1 extended:1 variability:1 incorporated:1 hinton:1 y1:4 transcutaneous:2 community:1 drift:4 inferred:5 differenced:1 hour:5 bar:1 below:1 pattern:1 dynamical:2 regime:2 reading:5 saturation:1 including:2 royal:1 event:4 difficulty:1 examination:1 hr:4 residual:1 representing:1 improve:1 brief:1 health:3 prior:2 review:1 acknowledgement:1 recalibrated:1 expect:3 mixed:1 interesting:1 filtering:5 age:1 rbpf:4 xp:2 consistent:1 editor:1 course:1 supported:1 keeping:1 disconnection:4 fall:3 explaining:1 taking:1 edinburgh:5 ghz:1 curve:3 dimension:1 calculated:2 transition:5 autoregressive:2 author:4 made:1 premature:1 programme:1 transaction:1 approximate:1 doucet:1 active:5 summing:1 spectrum:4 recal:3 continuous:7 table:6 channel:14 complex:1 artificially:1 european:1 domain:1 noise:4 edition:2 respiratory:2 fig:1 west:1 roc:2 inferring:1 exponential:1 tied:1 minute:2 yule:1 xt:11 specific:3 reproductive:1 decay:1 physiological:14 intractable:1 quantization:1 sequential:1 phd:1 execution:2 portal:4 illustrates:2 occurring:1 tsien:2 artifactual:15 tc:2 univariate:1 happening:1 nicu:5 ordered:1 temporarily:1 partially:1 applies:1 springer:1 ma:1 springerverlag:1 clinician:5 e:1 incorporate:1 phenomenon:2 handling:3 |
1,964 | 2,785 | Learning to Control an Octopus Arm with
Gaussian Process Temporal Difference Methods
Yaakov Engel*
AICML, Dept. of Computing Science
University of Alberta
Edmonton, Canada
[email protected]
Peter Szabo and Dmitry Volkinshtein
Dept. of Electrical Engineering
Technion Institute of Technology
Haifa, Israel
[email protected]
[email protected]
Abstract
The Octopus arm is a highly versatile and complex limb. How the Octopus controls such a hyper-redundant arm (not to mention eight of them!)
is as yet unknown. Robotic arms based on the same mechanical principles may render present day robotic arms obsolete. In this paper, we
tackle this control problem using an online reinforcement learning algorithm, based on a Bayesian approach to policy evaluation known as
Gaussian process temporal difference (GPTD) learning. Our substitute
for the real arm is a computer simulation of a 2-dimensional model of
an Octopus arm. Even with the simplifications inherent to this model,
the state space we face is a high-dimensional one. We apply a GPTDbased algorithm to this domain, and demonstrate its operation on several
learning tasks of varying degrees of difficulty.
1 Introduction
The Octopus arm is one of the most sophisticated and fascinating appendages found in
nature. It is an exceptionally flexible organ, with a remarkable repertoire of motion. In
contrast to skeleton-based vertebrate and present-day robotic limbs, the Octopus arm lacks
a rigid skeleton and has virtually infinitely many degrees of freedom. As a result, this arm is
highly hyper-redundant ? it is capable of stretching, contracting, folding over itself several
times, rotating along its axis at any point, and following the contours of almost any object.
These properties allow the Octopus to exhibit feats requiring agility, precision and force.
For instance, it is well documented that Octopuses are able to pry open a clam or remove
the plug off a glass jar, to gain access to its contents [1].
The basic mechanism underlying the flexibility of the Octopus arm (as well as of other
organs, such as the elephant trunk and vertebrate tongues) is the muscular hydrostat [2].
Muscular hydrostats are organs capable of exerting force and producing motion with the
sole use of muscles. The muscles serve in the dual roles of generating the forces and
maintaining the structural rigidity of the appendage. This is possible due to a constant
volume constraint, which arises from the fact that muscle tissue is incompressible. Proper
* To whom correspondence should be addressed. Web site: www.cs.ualberta.ca/~yaki
use of this constraint allows muscle contractions in one direction to generate forces acting
in perpendicular directions.
Due to their unique properties, understanding the principles governing the movement and
control of the Octopus arm and other muscular hydrostats is of great interest to both physiologists and robotics engineers. Recent physiological and behavioral studies produced
some interesting insights to the way the Octopus plans and controls its movements. Gutfreund et al. [3] investigated the reaching movement of an Octopus arm and showed that
the motion is performed by a stereotypical forward propagation of a bend point along the
arm. Yekutieli et al. [4] propose that the complex behavioral movements of the Octopus
are composed from a limited number of "motion primitives", which are spatio-temporally combined to produce the arm's motion.
Although physical implementations of robotic arms based on the same principles are not
yet available, recent progress in the technology of ?artificial muscles? using electroactive
polymers [5] may allow the construction of such arms in the near future. Needless to say,
even a single such arm poses a formidable control challenge, which does not appear to be
amenable to conventional control theoretic or robotics methodology. In this paper we propose a learning approach for tackling this problem. Specifically, we formulate the task of
bringing some part of the arm into a goal region as a reinforcement learning (RL) problem.
We then proceed to solve this problem using Gaussian process temporal difference learning
(GPTD) algorithms [6, 7, 8].
2 The Domain
Our experimental test-bed is a finite-elements computer simulation of a planar variant of the
Octopus arm, described in [9, 4]. This model is based on a decomposition of the arm into
quadrilateral compartments, and the constant muscular volume constraint mentioned above
is translated into a constant area constraint on each compartment. Muscles are modeled
as dampened springs and the mass of each compartment is concentrated in point masses
located at its corners¹. Although this is a rather crude approximation of the real arm, even for a modest 10-segment model there are already 88 continuous state variables², making
this a rather high dimensional learning problem. Figure 1 illustrates this model.
Since our model is 2-dimensional, all force vectors lie on the x-y plane, and the arm's motion is planar. This limitation is due mainly to the high computational cost of the full 3-dimensional calculations for any arm of reasonable size. There are four types of forces acting on the arm: 1) the internal forces generated by the arm's muscles, 2) the vertical forces caused by the influence of gravity and the arm's buoyancy in the medium in which it is immersed (typically sea water), 3) drag forces produced by the arm's motion through this
medium, and 4) internal pressure-induced forces responsible for maintaining the constant
volume of each compartment. The use of simulation allows us to easily investigate different
operating scenarios, such as zero or low gravity scenarios, different media, such as water,
air or vacuum, and different muscle models. In this study, we used a simple linear model
for the muscles. The force applied by a muscle at any given time t is
F(t) = [k_0 + (k_max - k_0) A(t)] (ℓ(t) - ℓ_rest) + c dℓ(t)/dt.
¹ For the purpose of computing volumes, masses, friction and muscle strength, the arm is effectively defined in three dimensions. However, no forces or motion are allowed in the third dimension. We also ignore the suckers located along the ventral side of the arm, and treat the arm as if it were symmetric with respect to reflection along its long axis. Finally, we comment that this model is restricted to modeling the mechanics of the arm and does not attempt to model its nervous system.
² 10 segments result in 22 point masses, each being described by 4 state variables: the x and y coordinates and their respective first time-derivatives.
[Figure 1 (diagram): the simulated arm, showing the arm base and arm tip, the dorsal and ventral sides, compartments C1 through CN, muscle pairs #1 through #N+1, and the transverse and longitudinal muscles; one compartment appears magnified at bottom right.]
Figure 1: An N compartment simulated Octopus arm. Each constant area compartment Ci
is defined by its surrounding 2 longitudinal muscles (ventral and dorsal) and 2 transverse
muscles. Circles mark the 2N + 2 point masses in which the arm's mass is distributed. In
the bottom right one compartment is magnified with additional detail.
This equation describes a dampened spring with a controllable spring constant. The spring's length at time t is ℓ(t); its resting length, at which it does not apply any force, is ℓ_rest.³ The spring's stiffness is controlled by the activation variable A(t) ∈ [0, 1]. Thus, when the activation is zero, and the contraction is isometric (with zero velocity), the relaxed muscle exhibits a baseline passive stiffness k_0. In a fully activated isometric contraction the spring constant becomes k_max. The second term is a dampening, energy-dissipating term, which is proportional to the rate of change in the spring's length, and (with c > 0) is
directed to resist that change. This is a very simple muscle model, which has been chosen
mainly due to its low computational cost, and the relative ease of computing the energy
expended by the muscle (why this is useful will become apparent in the sequel). More
complex muscle models can be easily incorporated into the simulator, but may result in
higher computational overhead. For additional details on the modeling of the other forces
and on the derivation of the equations of motion, refer to [4].
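To make the muscle model concrete, here is a minimal Python sketch of the force law above; the parameter values (k0, kmax, c, l_rest) are illustrative placeholders, not the constants used in the simulator of [4].

    def muscle_force(length, d_length_dt, activation,
                     k0=1.0, kmax=10.0, c=0.1, l_rest=0.5):
        # F(t) = [k0 + (kmax - k0) A(t)] (l(t) - l_rest) + c dl(t)/dt,
        # with activation A(t) in [0, 1]; the stiffness interpolates between
        # the passive baseline k0 and the fully activated constant kmax.
        stiffness = k0 + (kmax - k0) * activation
        return stiffness * (length - l_rest) + c * d_length_dt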
3 The Learning Algorithms
As mentioned above, we formulate the problem of controlling our Octopus arm as a RL
problem. We are therefore required to define a Markov decision process (MDP), consisting
of state and action spaces, a reward function and state transition dynamics. The states in
our model are the Cartesian coordinates of the point masses and their first time-derivatives.
A finite (and relatively small) number of actions are defined by specifying, for each action,
a set of activations for the arm?s muscles. The actions used in this study are depicted
in Figure 2. Given the arm?s current state and the chosen action, we use the simulator
to compute the arm?s state after a small fixed time interval. Throughout this interval the
activations remain fixed, until a new action is chosen for the next interval. The reward is
defined as ?1 for non-goal states, and 10 for goal states. This encourages the controller
to find policies that bring the arm to the goal as quickly as possible. In addition, in order
to encourage smoothness and economy in the arm?s movements, we subtract an energy
penalty term from these rewards. This term is proportional to the total energy expended
by all muscles during each action interval. Training is performed in an episodic manner:
Upon reaching a goal, the current episode terminates and the arm is placed in a new initial
position to begin a new episode. If a goal is not reached by some fixed amount of time, the
³ It is assumed that at all times ℓ(t) ≥ ℓ_rest. This is meant to ensure that our muscles can only apply force by contracting, as real muscles do. This can be assured by endowing the compartments with sufficiently high volumes, or equivalently, by setting ℓ_rest sufficiently low.
episode terminates regardless.
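As a sketch of the resulting reward signal (the penalty coefficient and the way muscle energy is obtained from the simulator are assumptions made for illustration):

    def reward(reached_goal, muscle_energy, energy_coeff=0.01):
        # +10 at goal states, -1 otherwise, minus a penalty proportional to
        # the total energy expended by all muscles during the action interval.
        base = 10.0 if reached_goal else -1.0
        return base - energy_coeff * muscle_energy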
[Figure 2 (diagram): six panels labeled Action #1 through Action #6.]
Figure 2: The actions used in the fixed-base experiments. Line thickness is proportional to
activation intensity. For the rotating base experiment, these actions were augmented with
versions of actions 1, 2, 4 and 5 that include clockwise and anti-clockwise torques applied
to the arm's base.
The RL algorithms implemented in this study belong to the Policy Iteration family of algorithms [10]. Such algorithms require an algorithmic component for estimating the mean
sum of (possibly discounted) future rewards collected along trajectories, as a function of
the trajectory?s initial state, also known as the value function. The best known RL algorithms for performing this task are temporal difference algorithms. Since the state space
of our problem is very large, some form of function approximation must be used to represent the value estimator. Temporal difference methods, such as TD(λ) and LSTD(λ), are
provably convergent when used with linearly parametrized function approximation architectures [10]. Used this way, they require the user to define a fixed set of basis functions,
which are then linearly combined to approximate the value function. These basis functions
must be defined over the entire state space, or at least over the subset of states that might
be reached during learning. When local basis functions are used (e.g., RBFs or tile codes
[11]), this inevitably means an exponential explosion of the number of basis functions with
the dimensionality of the state space. Nonparametric GPTD learning algorithms⁴ [8] offer
an alternative to the conventional parametric approach. The idea is to define a nonparametric statistical generative model connecting the hidden values and the observed rewards, and
a prior distribution over value functions. The GPTD modeling assumptions are that both
the prior and the observation-noise distributions are Gaussian, and that the model equations relating values and rewards have a special linear form. During or following a learning
session, in which a sequence of states and rewards is observed, Bayes' rule may be used
to compute the posterior distribution over value functions, conditioned on the observed reward sequence. Due to the GPTD model assumptions, this distribution is also Gaussian,
and is derivable in closed form. The benefits of using (nonparametric) GPTD methods are
that 1) the resulting value estimates are generally not constrained to lie in the span of any
predetermined set of basis functions, 2) no resources are wasted on unvisited state and action space regions, and 3) rather than the point estimates provided by other methods, GPTD
methods provide complete probability distributions over value functions.
In [6, 7, 8] it was shown how the computation of the posterior value GP moments can
be performed sequentially and online. This is done by employing a forward selection
mechanism, which is aimed at attaining a sparse approximation of the posterior moments,
under a constraint on the resulting error. The input samples (states, or state-action pairs)
used in this approximation are stored in a dictionary, the final size of which is often a good
indicator of the problem's complexity. Since nonparametric GPTD algorithms belong to the
family of kernel machines, they require the user to define a kernel function, which encodes
her prior knowledge and beliefs concerning similarities and correlations in the domain at
hand. More specifically, the kernel function k(·, ·) defines the prior covariance of the value process. Namely, for two arbitrary states x and x', Cov[V(x), V(x')] = k(x, x') (see [8] for details). In this study we experimented with several kernel functions; however, in this paper we will describe results obtained using a third degree polynomial kernel, defined by k(x, x') = (x^T x' + 1)^3. It is well known that this kernel induces a feature space of monomials of degree 3 or less [12]. For our 88 dimensional input space, this feature space is spanned by a basis consisting of (91 choose 3) = 121,485 linearly independent monomials.
⁴ GPTD models can also be defined parametrically; see [8].
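For concreteness, a sketch of this kernel, together with the kernel-expansion form in which the sparse GPTD posterior value mean is represented over the dictionary states; the dictionary and coefficient vector here are placeholders:

    import numpy as np

    def poly3_kernel(x, y):
        # k(x, y) = (x^T y + 1)^3, the third degree polynomial kernel.
        return (np.dot(x, y) + 1.0) ** 3

    def posterior_value_mean(x, dictionary, alpha):
        # The sparse GPTD posterior mean is a kernel expansion over the
        # dictionary states: V(x) = sum_i alpha_i * k(x_i, x).
        return sum(a * poly3_kernel(xi, x)
                   for xi, a in zip(dictionary, alpha))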
We experimented with two types of policy-iteration based algorithms. The first was optimistic policy iteration (OPI), in which, in any given time-step, the current GPTD value
estimator is used to evaluate the successor states resulting from each one of the actions
available at the current state. Since, given an action, the dynamics are deterministic, we
used the simulation to determine the identity of successor states. An action is then chosen according to a semi-greedy selection rule (more on this below). A more disciplined
approach is provided by a paired actor-critic algorithm. Here, two independent GPTD
estimators are maintained. The first is used to determine the policy, again, by some semi-greedy action selection rule, while its parameters remain fixed. In the meantime, the second
GPTD estimator is used to evaluate the stationary policy determined by the first. After the
second GPTD estimator is deemed sufficiently accurate, as indicated by the GPTD value
variance estimate, the roles are reversed. This is repeated as many times as required, until
no significant improvement in policies is observed.
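The OPI action selection can be sketched as a one-step lookahead; simulate_step and value below stand in for the arm simulator and the current GPTD posterior mean, and are assumptions of this sketch:

    def greedy_action(state, actions, simulate_step, value, gamma=1.0):
        # Given an action, the dynamics are deterministic, so each candidate
        # action is scored by its immediate reward plus the (discounted)
        # estimated value of the simulated successor state.
        def score(action):
            next_state, r = simulate_step(state, action)
            return r + gamma * value(next_state)
        return max(actions, key=score)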
Although the latter algorithm, being an instance of approximate policy iteration, has a better theoretical grounding [10], in practice it was observed that the GPTD-based OPI worked significantly faster in this domain. In the experiments reported in the next section we therefore used the latter (i.e., OPI). For additional details and experiments refer to [13]. One final
wrinkle concerns the selection of the initial state in a new episode. Since plausible arm
configurations cannot be attained by randomly drawing 88 state variables from some simple
distribution, a more involved mechanism for setting the initial state in each episode has to
be defined. The method we chose is tightly connected to the GPTD mode of operation: At
the end of each episode, 10 random states were drawn from the GPTD dictionary. From
these, the state with the highest posterior value variance estimate was selected as the initial
state of the next episode. This is a form of active learning, which is made possible by
employing GPTD, and is applicable to general episodic RL problems.
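A sketch of this selection rule; posterior_variance is a hypothetical accessor for the GPTD value-variance estimate at a state:

    import random

    def next_initial_state(dictionary, posterior_variance, n_candidates=10):
        # Draw 10 random states from the GPTD dictionary and begin the next
        # episode at the one with the highest posterior value variance.
        candidates = random.sample(dictionary,
                                   min(n_candidates, len(dictionary)))
        return max(candidates, key=posterior_variance)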
4 Experiments
The experiments described in this section are aimed at demonstrating the applicability of
GPTD-based algorithms to large-scale RL problems, such as our Octopus arm. In these
experiments we used the simulated 10-compartment arm described in Section 2. The set
of goal states consisted of a circular region located somewhere within the potential reach
of the arm (recall that the arm has no fixed length). The action set depends on the task, as
described in Figure 2. Training episode duration was set to 4 seconds, and the time interval
between action decisions was 0.4 seconds. This allowed a maximum of 10 learning steps
per trial. The discount factor was set to 1.
The exploration policy used was the ubiquitous ε-greedy policy: The greedy action (i.e. the one for which the sum of the reward and the successor state's estimated value is the highest) is chosen with probability 1 - ε, and with probability ε a random action is drawn from a uniform distribution over all other actions. The value of ε is reduced during learning, until the policy converges to the greedy one. In our implementation, in each episode, ε was dependent on the number of successful episodes experienced up to that point. The general form of this relation is ε = ε₀ N_{1/2} / (N_{1/2} + N_goals), where N_goals is the number of successful episodes, ε₀ is the initial value of ε, and N_{1/2} is the number of successful episodes required to reduce ε to ε₀/2.
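The schedule itself is a one-liner; the numerical values of eps0 and n_half below are illustrative, since the paper does not report them:

    def epsilon(n_goals, eps0=0.5, n_half=50):
        # eps = eps0 * N_half / (N_half + N_goals): decays toward zero as
        # successful episodes accumulate, and equals eps0 / 2 exactly when
        # n_goals == n_half.
        return eps0 * n_half / (n_half + n_goals)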
Figure 3: Examples of initial states for the rotating-base experiments (left) and the fixed-base experiments (right). Starting states also include velocities, which are not shown.
In order to evaluate the quality of learned solutions, 100 initial arm configurations were created. This was done by starting a simulation from some fixed arm configuration, performing a long sequence of random actions, and sampling states randomly from the resulting
trajectory. Some examples of such initial states are depicted in Figure 3. During learning,
following each training episode, the GPTD-learned parameters were recorded on file. Each
set of GPTD parameters defines a value estimator, and therefore also a greedy policy with
respect to the posterior value mean. Each such policy was evaluated by using it, starting
from each of the 100 initial test states. For each starting state, we recorded whether or not
a goal state was reached within the episode's time limit (4 seconds), and the duration of the
episode (successful episodes terminate when a goal state is reached). These two measures
of performance were averaged over the 100 starting states and plotted against the episode
index, resulting in two corresponding learning curves for each experiment5 .
We started with a simple task in which reaching the goal is quite easy. Any point of the
arm entering the goal circle was considered as a success. The arm's base was fixed and the
gravity constant was set to zero, corresponding to a scenario in which the arm moves on
a horizontal frictionless plane. In the second experiment the task was made a little more
difficult. The goal was moved further away from the base of the arm. Moreover, gravity
was set to its natural level of 9.8 m/s², with the motion of the arm now restricted to a vertical
plane. The learning curves corresponding to these two experiments are shown in Figure 4.
A success rate of 100% was reached after 10 and 20 episodes, respectively. In both cases,
even after a success rate of 100% is attained, the mean time-to-goal keeps improving. The
final dictionaries contained about 200 and 350 states, respectively.
In our next two experiments, the arm had to reach a goal located so that it cannot be reached
unless the base of the arm is allowed to rotate. We added base-rotating actions to the
basic actions used in the previous experiments (see Figure 2 for an explanation). Allowing
a rotating base significantly increases the size of the action set, as well as the size of the
reachable state space, making the learning task considerably more difficult. To make things
even more difficult, we rewarded the arm only if it reached the goal with its tip, i.e. the
two point-masses at the end of the arm. In the first experiment in this series, gravity was
switched on. A 99% success rate was attained after 270 trials, with a final dictionary size of
⁵ It is worth noting that this evaluation procedure requires by far more time than the actual learning, since each point in the graphs shown below requires us to perform 100 simulation runs. Whereas learning can be performed almost in real-time (depending on dictionary size), computing the statistics for a single learning run may take a day, or more.
Figure 4: Success rate (solid) and mean time to goal (dashed) for a fixed-base arm in zero
gravity (left), and with gravity (right). 100% success was reached after 10 and 20 trials,
respectively. The insets illustrate one starting position and the location of the goal regions,
in each case.
about 600 states. In the second experiment gravity was switched off, but a circular region
of obstacle states was placed between the arm's base and the goal circle. If any part of the
arm touched the obstacle, the episode immediately terminated with a negative reward of -2.
Here, the success rate peaked at 40% after around 1000 episodes, and remained roughly
constant thereafter. It should be taken into consideration that at least some of the 100 test
starting states are so close to the obstacle that, regardless of the action taken, the arm cannot
avoid hitting the obstacle. The learning curves are presented in Figure 5.
Figure 5: Success rate (solid) and mean time to goal (dashed) for a rotating-base arm with
gravity switched on (left), and with gravity switched off but with an obstacle blocking the
direct path to the goal (right). The arm has to rotate its base in order to reach the goal in
either case (see insets). Positive reward was given only for arm-tip contact; any contact
with the obstacle terminated the episode with a penalty. A 99% success rate was attained
after 270 episodes for the first task, whereas for the second task success rate reached 40%.
Video movies showing the arm in various scenarios are available at www.cs.ualberta.ca/~yaki/movies/.
5 Discussion
Up to now, GPTD based RL algorithms have only been tested on low dimensional problem
domains. Although kernel methods have handled high-dimensional data, such as handwrit-
ten digits, remarkably well in supervised learning domains, the applicability of the kernel-based GPTD approach to high dimensional RL problems has remained an open question.
The results presented in this paper are, in our view, a clear indication that GPTD methods are indeed scalable, and should be considered seriously as a possible solution method
by practitioners facing large-scale RL problems. Further work on the theory and practice of GPTD methods is called for. Standard techniques for model selection and tuning
of hyper-parameters can be incorporated straightforwardly into GPTD algorithms. Value
iteration-based variants, i.e. "GPQ-learning", would provide yet another useful set of tools.
The Octopus arm domain is of independent interest, both to physiologists and robotics
engineers. The fact that reasonable controllers for such a complex arm can be learned from
trial and error, in a relatively short time, should not be understated. Further work in this
direction should be aimed at extending the Octopus arm simulation to a full 3-dimensional
model, as well as applying our RL algorithms to real robotic arms based on the muscular
hydrostat principle, when these become available.
Acknowledgments
Y. E. was partially supported by the AICML and the Alberta Ingenuity fund. We would
also like to thank the Ollendorff Minerva Center, for supporting this project.
References
[1] G. Fiorito, C. V. Planta, and P. Scotto. Problem solving ability of Octopus Vulgaris Lamarck
(Mollusca, Cephalopoda). Behavioral and Neural Biology, 53(2):217-230, 1990.
[2] W.M. Kier and K.K. Smith. Tongues, tentacles and trunks: The biomechanics of movement in
muscular-hydrostats. Zoological Journal of the Linnean Society, 83:307-324, 1985.
[3] Y. Gutfreund, T. Flash, Y. Yarom, G. Fiorito, I. Segev, and B. Hochner. Organization of Octopus
arm movements: A model system for studying the control of flexible arms. The Journal of Neuroscience, 16:7297-7307, 1996.
[4] Y. Yekutieli, R. Sagiv-Zohar, R. Aharonov, Y. Engel, B. Hochner, and T. Flash. A dynamic
model of the Octopus arm. I. Biomechanics of the Octopus reaching movement. Journal of
Neurophysiology (in press), 2005.
[5] Y. Bar-Cohen, editor. Electroactive Polymer (EAP) Actuators as Artificial Muscles - Reality,
Potential and Challenges. SPIE Press, 2nd edition, 2004.
[6] Y. Engel, S. Mannor, and R. Meir. Bayes meets Bellman: The Gaussian process approach
to temporal difference learning. In Proc. of the 20th International Conference on Machine
Learning, 2003.
[7] Y. Engel, S. Mannor, and R. Meir. Reinforcement learning with Gaussian processes. In Proc.
of the 22nd International Conference on Machine Learning, 2005.
[8] Y. Engel. Algorithms and Representations for Reinforcement Learning. PhD thesis, The Hebrew
University of Jerusalem, 2005. www.cs.ualberta.ca/~yaki/papers/thesis.ps.
[9] R. Aharonov, Y. Engel, B. Hochner, and T. Flash. A dynamical model of the octopus arm.
In Neuroscience letters. Supl. 48. Proceedings of the 6th annual meeting of the Israeli Neuroscience Society, 1997.
[10] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[11] R.S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[12] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University
Press, Cambridge, England, 2004.
[13] Y. Engel, P. Szabo, and D. Volkinshtein. Learning to control an Octopus arm with Gaussian
process temporal difference methods. Technical report, Technion Institute of Technology, 2005.
www.cs.ualberta.ca/~yaki/reports/octopus.pdf.
1,965 | 2,786 | Oblivious Equilibrium: A Mean Field
Approximation for Large-Scale Dynamic Games
Gabriel Y. Weintraub, Lanier Benkard, and Benjamin Van Roy
Stanford University
{gweintra,lanierb,bvr}@stanford.edu
Abstract
We propose a mean-field approximation that dramatically reduces the
computational complexity of solving stochastic dynamic games. We provide conditions that guarantee our method approximates an equilibrium
as the number of agents grows. We then derive a performance bound to
assess how well the approximation performs for any given number of
agents. We apply our method to an important class of problems in applied microeconomics. We show with numerical experiments that we are
able to greatly expand the set of economic problems that can be analyzed
computationally.
1 Introduction
In this paper we consider a class of infinite horizon non-zero sum stochastic dynamic
games. At each period of time, each agent has a given state and can make a decision.
These decisions together with random shocks determine the evolution of the agents' states. Additionally, agents receive profits depending on the current states and decisions. There is a literature on such models which focuses on computation of Markov perfect equilibria (MPE) using dynamic programming algorithms. A major shortcoming of this approach, however, is the
computational complexity associated with solving for the MPE. When there are more than
a few agents participating in the game and/or more than a few states per agent, the curse of
dimensionality renders dynamic programming algorithms intractable.
In this paper we consider a class of stochastic dynamic games where the state of an agent
captures its competitive advantage. Our main motivation is to consider an important class of
models in applied economics, namely, dynamic industry models of imperfect competition.
However, we believe our methods can be useful in other contexts as well. To clarify the
type of models we consider, let us describe a specific example of a dynamic industry model.
Consider an industry where a group of firms can invest to improve the quality of their
products over time. The state of a given firm represents its quality level. The evolution
of quality is determined by investment and random shocks. Finally, at every period, given
their qualities, firms compete in the product market and receive profits. Many real world
industries where, for example, firms invest in R&D or advertising are well described by
this model.
In this context, we propose a mean-field approximation approach that dramatically simplifies the computational complexity of stochastic dynamic games. We propose a simple
algorithm for computing an "oblivious" equilibrium in which each agent is assumed to make decisions based only on its own state and knowledge of the long run equilibrium distribution of states, but where agents ignore current information about rivals' states. We prove that, if the distribution of agents obeys a certain "light-tail" condition, when the number of agents becomes large the oblivious equilibrium approximates an MPE. We then derive
an error bound that is simple to compute to assess how well the approximation performs
for any given number of agents.
We apply our method to analyze dynamic industry models of imperfect competition. We
conduct numerical experiments that show that our method works well when there are several hundred firms, and sometimes even tens of firms. Our method, which uses simple code
that runs in a couple of minutes on a laptop computer, greatly expands the set of economic
problems that can be analyzed computationally.
2 A Stochastic Dynamic Game
In this section, we formulate a non-zero sum stochastic dynamic game. The system evolves
over discrete time periods and an infinite horizon. We index time periods with nonnegative
integers t ∈ ℕ (ℕ = {0, 1, 2, . . .}). All random variables are defined on a probability space (Ω, F, P) equipped with a filtration {F_t : t ≥ 0}. We adopt a convention of indexing by t variables that are F_t-measurable.
There are n agents indexed by S = {1, ..., n}. The state of each agent captures its ability to compete in the environment. At time t, the state of agent i ∈ S is denoted by x_it ∈ ℕ.
We define the system state s_t to be a vector over individual states that specifies, for each state x ∈ ℕ, the number of agents at state x in period t. We define the state space S = { s ∈ ℕ^∞ : Σ_{x=0}^∞ s(x) = n }. For each i ∈ S, we define s_{-i,t} ∈ S to be the state of the competitors of agent i; that is, s_{-i,t}(x) = s_t(x) - 1 if x_it = x, and s_{-i,t}(x) = s_t(x) otherwise.
In each period, each agent earns profits. An agent's single period expected profit π_m(x_it, s_{-i,t}) depends on its state x_it, its competitors' state s_{-i,t}, and a parameter m ∈ ℝ_+. For example, in the context of an industry model, m could represent the total number of consumers, that is, the size of the pie to be divided among all agents. We assume that for all x ∈ ℕ, s ∈ S, m ∈ ℝ_+, π_m(x, s) > 0 and is increasing in x. Hence, agents in larger states earn more profits.
In each period, each agent makes a decision. We interpret this decision as an investment to improve the state at the next period. If an agent invests ι_it ∈ ℝ_+, then the agent's state at time t + 1 is given by x_{i,t+1} = x_it + w(ι_it, ζ_{i,t+1}), where the function w captures the impact of investment on the state and ζ_{i,t+1} reflects uncertainty in the outcome of investment. For example, in the context of an industry model, uncertainty may arise due to the risk associated with a research endeavor or a marketing campaign. We assume that for all ζ, w(ι, ζ) is nondecreasing in ι. Hence, if the amount invested is larger, it is more likely the agent will transit next period to a better state. The random variables {ζ_it | t ≥ 0, i ≥ 1} are i.i.d. We denote the unit cost of investment by d.
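As a concrete illustration, a minimal Python sketch of one period of these dynamics; the particular increment function w and shock distribution below are illustrative choices that satisfy the stated monotonicity assumption, not those of any specific application:

    import random

    def step(states, investments):
        # x_{i,t+1} = x_it + w(iota_it, zeta_{i,t+1}); here w moves the
        # state up by one when an i.i.d. shock succeeds, with a success
        # probability that is increasing in the investment iota, so w is
        # nondecreasing in iota, as assumed.
        next_states = []
        for x, iota in zip(states, investments):
            p = iota / (1.0 + iota)
            next_states.append(x + 1 if random.random() < p else x)
        return next_states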
Each agent aims to maximize expected net present value. The interest rate is assumed to be positive and constant over time, resulting in a constant discount factor of β ∈ (0, 1) per time period. The equilibrium concept we will use builds on the notion of a Markov perfect equilibrium (MPE), in the sense of [3]. We further assume that equilibrium is symmetric, such that all agents use a common stationary strategy. In particular, there is a function μ such that at each time t, each agent i ∈ S makes a decision ι_it = μ(x_it, s_{-i,t}). Let M denote the set of strategies such that an element μ ∈ M is a function μ : ℕ × S → ℝ_+.
We define the value function V(x, s | μ', μ) to be the expected net present value for an agent at state x when its competitors' state is s, given that its competitors each follow a common strategy μ ∈ M, and the agent itself follows strategy μ' ∈ M. In particular,
V(x, s | μ', μ) = E_{μ',μ}[ Σ_{k=t}^∞ β^(k-t) ( π(x_ik, s_{-i,k}) - d ι_ik ) | x_it = x, s_{-i,t} = s ],
where i is taken to be the index of an agent at state x at time t, and the subscripts of
the expectation indicate the strategy followed by agent i and the strategy followed by its
competitors. In an abuse of notation, we will use the shorthand V(x, s | μ) ≡ V(x, s | μ, μ) to refer to the expected discounted value of profits when agent i follows the same strategy μ as its competitors.
An equilibrium to our model comprises a strategy μ ∈ M that satisfies the following condition:
(2.1)   sup_{μ'∈M} V(x, s | μ', μ) = V(x, s | μ),   ∀x ∈ ℕ, ∀s ∈ S.
Under some technical conditions, one can establish existence of an equilibrium in pure
strategies [4]. With respect to uniqueness, in general we presume that our model may
have multiple equilibria. Dynamic programming algorithms can be used to optimize agent
strategies, and equilibria to our model can be computed via their iterative application. However, these algorithms require compute time and memory that grow proportionately with the
number of relevant system states, which is often intractable in contexts of practical interest.
This difficulty motivates our alternative approach.
3 Oblivious Equilibrium
We will propose a method for approximating MPE based on the idea that when there are
a large number of agents, simultaneous changes in individual agent states can average out
because of a law of large numbers such that the normalized system state remains roughly
constant over time. In this setting, each agent can potentially make near-optimal decisions
based only on its own state and the long run average system state. With this motivation,
we consider restricting agent strategies so that each agent's decisions depend only on the agent's state. We call such restricted strategies oblivious since they involve decisions made without full knowledge of the circumstances, in particular, the state of the system. Let M̃ ⊂ M denote the set of oblivious strategies. Since each strategy μ ∈ M̃ generates decisions μ(x, s) that do not depend on s, with some abuse of notation, we will often drop the second argument and write μ(x).
Let s̃_μ be the long-run expected system state when all agents use an oblivious strategy μ ∈ M̃. For an oblivious strategy μ' ∈ M̃ we define an oblivious value function
Ṽ(x | μ', μ) = E_{μ'}[ Σ_{k=t}^∞ β^(k-t) ( π(x_ik, s̃_μ) - d ι_ik ) | x_it = x ].
This value function should be interpreted as the expected net present value of an agent that
is at state x and follows oblivious strategy μ', under the assumption that its competitors' state will be s̃_μ for all time. Again, we abuse notation by using Ṽ(x | μ) ≡ Ṽ(x | μ, μ) to refer to the oblivious value function when agent i follows the same strategy μ as its competitors.
We now define a new solution concept: an oblivious equilibrium consists of a strategy μ̃ ∈ M̃ that satisfies the following condition:
(3.1)   sup_{μ'∈M̃} Ṽ(x | μ', μ̃) = Ṽ(x | μ̃),   ∀x ∈ ℕ.
In an oblivious equilibrium each firm optimizes an oblivious value function assuming that its competitors' state will be s̃_μ̃ for all time; the optimal strategy obtained must be μ̃ itself. It is straightforward to show that an oblivious equilibrium exists under mild technical conditions. With respect to uniqueness, we have been unable to find multiple oblivious equilibria in any of the applied problems we have considered, but similarly with the case of MPE, we have no reason to believe that in general there is a unique oblivious equilibrium.
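Equivalently, μ̃ solves a single-agent dynamic program in which the competitors' state is frozen at s̃_μ̃. Under the transition rule of Section 2, the associated Bellman equation can be written as (a restatement for clarity, not an additional assumption)
Ṽ(x | μ̃) = max_{ι ≥ 0} { π_m(x, s̃_μ̃) - d ι + β E_ζ[ Ṽ(x + w(ι, ζ) | μ̃) ] },
with μ̃(x) attaining the maximum at each state x.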
4 Asymptotic Results
In this section, we establish asymptotic results that provide conditions under which oblivious equilibria offer close approximations to MPE as the number of agents, n, grows. We consider a sequence of systems indexed by the one period profit parameter m, and we assume that the number of agents in system m is given by n^(m) = am, for some a > 0. Recall that m represents, for example, the total pie to be divided by the agents, so it is reasonable to increase n^(m) and m at the same rate.
We index functions and random variables associated with system m with a superscript (m). From this point onward we let μ̃^(m) denote an oblivious equilibrium for system m. Let V^(m) and Ṽ^(m) represent the value function and oblivious value function, respectively, when the system is m. To further abbreviate notation we denote the expected system state associated with μ̃^(m) by s̃^(m) ≡ s̃_{μ̃^(m)}. The random variable s_t^(m) denotes the system state at time t when every agent uses strategy μ̃^(m). We denote the invariant distribution of {s_t^(m) : t ≥ 0} by q^(m). In order to simplify our analysis, we assume that the initial system state s_0^(m) is sampled from q^(m). Hence, s_t^(m) is a stationary process; s_t^(m) is distributed according to q^(m) for all t ≥ 0. It will be helpful to decompose s_t^(m) according to s_t^(m) = f_t^(m) n^(m), where f_t^(m) is the random vector that represents the fraction of agents in each state. Similarly, let f̃^(m) ≡ E[f_t^(m)] denote the expected fraction of agents in each state. With some abuse of notation, we define π_m(x_it, f_{-i,t}, n) ≡ π_m(x_it, n f_{-i,t}). We assume that for all x ∈ ℕ and f ∈ S_1, π_m(x, f, n^(m)) = Θ(1), where S_1 = { f ∈ ℝ_+^∞ : Σ_{x∈ℕ} f(x) = 1 }. If m and n^(m) grow at the same rate, one period profits remain positive and bounded.
Our aim is to establish that, under certain conditions, oblivious equilibria well-approximate
MPE as m grows. We define the following concept to formalize the sense in which this
approximation becomes exact.
Definition 4.1. A sequence μ̃^(m) ∈ M̃ possesses the asymptotic Markov equilibrium (AME) property if for all x ∈ ℕ,
lim_{m→∞} E_{μ̃^(m)}[ sup_{μ'∈M} V^(m)(x, s_t^(m) | μ', μ̃^(m)) - V^(m)(x, s_t^(m) | μ̃^(m)) ] = 0.
The definition of AME assesses approximation error at each agent state x in terms of the
amount by which an agent at state x can increase its expected net present value by deviating from the oblivious equilibrium strategy μ̃^(m), and instead following an optimal
(non-oblivious) best response that keeps track of the true system state. The system states
are averaged according to the invariant distribution.
It may seem that the AME property is always obtained because n^(m) is growing to infinity. However, recall that each agent state reflects its competitive advantage, and if there are agents that are too "dominant" this is not necessarily the case. To make this idea more concrete, let us go back to our industry example where firms invest in quality. Even when there are a large number of firms, if the market tends to be concentrated, for example, if the market is usually dominated by a single firm with an extremely high quality, the AME property is unlikely to hold. To ensure the AME property, we need to impose a "light-tail" condition that rules out this kind of domination.
Note that d ln π_m(y, f, n) / df(x) is the semi-elasticity of one period profits with respect to the fraction of agents in state x. We define the maximal absolute semi-elasticity function:
g(x) = max_{m∈ℝ_+, y∈ℕ, f∈S_1, n∈ℕ} | d ln π_m(y, f, n) / df(x) |.
For each x, g(x) is the maximum rate of relative change of any agent's single-period profit
that could result from a small change in the fraction of agents at state x. Since larger
competitors tend to have greater influence on agent profits, g(x) typically increases with x,
and can be unbounded.
Finally, we introduce our light-tail condition. For each m, let x̃^(m) ∼ f̃^(m); that is, x̃^(m) is a random variable with probability mass function f̃^(m). x̃^(m) can be interpreted as the state of an agent that is randomly sampled from among all agents while the system state is distributed according to its invariant distribution.
Assumption 4.1. For all states x, g(x) < ∞. For all ε > 0, there exists a state z such that
E[ g(x̃^(m)) 1{x̃^(m) > z} ] ≤ ε
for all m.
Put simply, the light tail condition requires that states where a small change in the fraction
of agents has a large impact on the profits of other agents, must have a small probability
under the invariant distribution. In the previous example of an industry where firms invest
in quality this typically means that very large firms (and hence high concentration) rarely
occur under the invariant distribution.
Theorem 4.1. Under Assumption 4.1 and some other regularity conditions¹, the sequence μ̃^(m) of oblivious equilibria possesses the AME property.
5 Error Bounds
While the asymptotic results from Section 4 provide conditions under which the approximation will work well as the number of agents grows, in practice one would also like to
know how the approximation performs for a particular system. For that purpose we derive
performance bounds on the approximation error that are simple to compute via simulation and can be used to assess the accuracy of the approximation for a particular problem
instance.
We consider a system m and, to simplify notation, we suppress the index m. Consider an oblivious strategy μ̃. We will quantify approximation error at each agent state x ∈ ℕ by E[ sup_{μ'∈M} V(x, s_t | μ', μ̃) - V(x, s_t | μ̃) ]. The expectation is over the invariant distribution of s_t. The next theorem provides a bound on the approximation error. Recall that s̃ is the long run expected state in oblivious equilibrium (E[s_t]). Let a_x(y) be the expected discounted sum of an indicator of visits to state y for an agent starting at state x that uses strategy μ̃.
Theorem 5.1. For any oblivious equilibrium μ̃ and state x ∈ ℕ,
(5.1)   E[ΔV] ≤ (1/(1-β)) E[Δπ(s_t)] + Σ_{y∈ℕ} a_x(y) ( π(y, s̃) - E[π(y, s_t)] ),
where ΔV = sup_{μ'∈M} V(x, s_t | μ', μ̃) - V(x, s_t | μ̃) and Δπ(s) = max_{y∈ℕ} ( π(y, s) - π(y, s̃) ).
¹ In particular, we require that the single period profit function is "smooth" as a function of its arguments. See [5] for details.
The error bound can be easily estimated via simulation algorithms. In particular, note that
the bound is not a function of the true MPE or even of the optimal non-oblivious best
response strategy.
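A minimal sketch of one such Monte Carlo estimate of the right-hand side of (5.1); the sampled invariant states, the profit function, and the discounted visit weights a_x(y) are inputs assumed to come from a long simulation run under μ̃, and the max/sum over y is truncated at an assumed y_max:

    import numpy as np

    def bound_estimate(states, profit, a_x, s_bar, beta, y_max=50):
        # states: system states s_t sampled from the invariant distribution
        # under the oblivious equilibrium; profit(y, s): one-period profit;
        # a_x(y): expected discounted visit count from initial state x;
        # s_bar: the long-run expected state.
        ys = range(y_max + 1)
        # First term: E[ max_y (profit(y, s_t) - profit(y, s_bar)) ] / (1 - beta)
        d_pi = np.mean([max(profit(y, s) - profit(y, s_bar) for y in ys)
                        for s in states])
        # Second term: sum_y a_x(y) (profit(y, s_bar) - E[profit(y, s_t)])
        correction = sum(a_x(y) * (profit(y, s_bar)
                                   - np.mean([profit(y, s) for s in states]))
                         for y in ys)
        return d_pi / (1.0 - beta) + correction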
6 Application: Industry Dynamics
Many problems in applied economics are dynamic in nature. For example, models involving the entry and exit of firms, collusion among firms, mergers, advertising, investment in
R&D or capacity, network effects, durable goods, consumer learning, learning by doing,
and transaction or adjustment costs are inherently dynamic. [1] (hereafter EP) introduced
an approach to modeling industry dynamics. See [6] for an overview. Computational complexity has been a limiting factor in the use of this modeling approach. In this section we
use our method to expand the set of dynamic industries that can be analyzed computationally.
Even though our results apply to more general models where, for example, firms make exit
and entry decisions, here we consider a particular case of an EP model which itself is a
particular case of the model introduced in Section 2. We consider a model of a single-good
industry with quality differentiation. The agents are firms that can invest to improve the
quality of their product over time. In particular, x_it is the quality level of firm i at time t, and ι_it represents the amount of money invested by firm i at time t to improve its quality. We assume the one period profit function is derived from a logit demand system in which firms compete by setting prices. In this case, m represents the market size. See [5] for more details about the model.
6.1 Computational Experiments
In this section, we discuss computational results that demonstrate how our approximation
method significantly expands the range of relevant EP-type models like the one previously
introduced that can be studied computationally.
First, we propose an algorithm to compute oblivious equilibrium [5]. Whether this algorithm is guaranteed to terminate in a finite number of iterations remains an open issue.
However, in over 90% of the numerical experiments we present in this section, it converged
in less than five minutes (and often much less than this). In the rest, it converged in less
than fifteen minutes.
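The algorithm itself is specified in [5]; the following Python sketch only conveys the underlying fixed-point idea, with best_response and long_run_state as assumed helper routines (a single-agent dynamic program and a simulation of E[s_t], respectively); as noted above, convergence is not guaranteed.

    def oblivious_equilibrium(best_response, long_run_state, mu0,
                              tol=1e-6, max_iter=1000):
        # Alternate between the long-run expected state induced by the
        # current oblivious strategy and the single-agent best response to
        # that frozen state, until the strategy (a dict mapping state to
        # investment, with a fixed key set) stabilizes.
        mu = mu0
        for _ in range(max_iter):
            s_bar = long_run_state(mu)
            mu_next = best_response(s_bar)
            if max(abs(mu_next[x] - mu[x]) for x in mu) < tol:
                return mu_next
            mu = mu_next
        return mu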
Our first set of results investigates the behavior of the approximation error bound under
several different model specifications. A wide range of parameters for our model could
reasonably represent different real world industries of interest. In practice the parameters
would either be estimated using data from a particular industry or chosen to reflect an
industry under study. We begin by investigating a particular set of representative parameter
values. See [5] for the specifications.
For each set of parameters, we use the approximation error bound to compute an upper bound on the percentage error in the value function, E[ sup_{μ'∈M} V(x, s | μ', μ̃) - V(x, s | μ̃) ] / E[ V(x, s | μ̃) ], where μ̃ is the OE strategy and the expectations are taken with respect to s. We estimate the
expectations using simulation. We compute the previously mentioned percentage approximation error bound for different market sizes m and number of firms n(m) . As the market
size increases, the number of firms increases and the approximation error bound decreases.
In our computational experiments we found that the most important parameter affecting
the approximation error bounds was the degree of vertical product differentiation, which
indicates the importance consumers assign to product quality. In Figure 1 we present our
results. When the parameter that measures the level of vertical differentiation is low the
approximation error bound is less than 0.5% with just 5 firms, while when the parameter is
high it is 5% for 5 firms, less than 3% with 40 firms, and less than 1% with 400 firms.
Figure 1: Percentage approximation error bound for fixed number of firms.
Most economic applications would involve from less than ten to several hundred firms.
These results show that the approximation error bound may sometimes be small (<2%) in
these cases, though this would depend on the model and parameter values for the industry
under study.
Having gained some insight into what features of the model lead to low values of the
approximation error bound, the question arises as to what value of the error bounds is
required to obtain a good approximation. To shed light on this issue we compare long-run
statistics for the same industry primitives under oblivious equilibrium and MPE strategies.
A major constraint on this exercise is that it requires the ability to actually compute the
MPE, so to keep computation manageable we use four firms here. We compare the average
values of several economic statistics of interest under the oblivious equilibrium and the
MPE invariant distributions. The quantities compared are: average investment, average
producer surplus, average consumer surplus, average share of the largest firm, and average
share of the largest two firms. We also computed the actual benefit from deviating and keeping track of the industry state (the actual difference E[ sup_{μ'∈M} V(x, s | μ', μ̃) - V(x, s | μ̃) ] / E[ V(x, s | μ̃) ]).
Note that the latter quantity should always be smaller than the approximation error
bound.
From the computational experiments we conclude the following (see [5] for a table with
the results):
1. When the bound is less than 1% the long-run quantities estimated under oblivious
equilibrium and MPE strategies are very close.
2. Performance of the approximation depends on the richness of the equilibrium investment process. Industries with a relatively low cost of investment tend to have
a symmetric average distribution over quality levels reflecting a rich investment
process. In these cases, when the bound is between 1% and 20%, the long-run quantities
estimated under oblivious equilibrium and MPE strategies are still quite close. In
industries with high investment cost the industry (system) state tends to be skewed,
reflecting low levels of investment. When the bound is above 1% and there is little
investment, the long-run quantities can be quite different on a percentage basis
(5% to 20%), but still remain fairly close in absolute terms.
3. The performance bound is not tight. For a wide range of parameters the performance bound is as much as 10 to 20 times larger than the actual benefit from
deviating.
The previous results suggest that MPE dynamics are well-approximated by oblivious equilibrium strategies when the approximation error bound is small (less than 1-2%, and in some cases even up to 20%). Our results demonstrate that the oblivious equilibrium approximation significantly expands the range of applied problems that can be analyzed computationally.
7
Conclusions and Future Research
The goal of this paper has been to increase the set of applied problems that can be addressed
using stochastic dynamic games. Due to the curse of dimensionality, the applicability of
these models has been severely limited. As an alternative, we proposed a method for approximating MPE behavior using an oblivious equilibrium, where agents make decisions
only based on their own state and the long run average system state. We began by showing that the approximation works well asymptotically, where asymptotics were taken in
the number of agents. We also introduced a simple algorithm to compute an oblivious
equilibrium.
To facilitate using oblivious equilibrium in practice, we derived approximation error
bounds that indicate how good the approximation is in any particular problem under study.
These approximation error bounds are quite general and thus can be used in a wide class of
models. We use our methods to analyze dynamic industry models of imperfect competition
and showed that oblivious equilibrium often yields a good approximation of MPE behavior
for industries with a couple hundred firms, and sometimes even with just tens of firms.
We have considered very simple strategies that are functions only of an agent's own state
and the long run average system state. While our results show that these simple strategies
work well in many cases, there remains a set of problems where exact computation is not
possible and yet our approximation will not work well either. For such cases, our hope
is that our methods will serve as a basis for developing better approximations that use
additional information, such as the states of the dominant agents. Solving for equilibria
of this type would be more difficult than solving for oblivious equilibria, but is still likely
to be computationally feasible. Since showing that such an approach would provide a
good approximation is not a simple extension of our results, this will be a subject of future
research.
References
[1] R. Ericson and A. Pakes. Markov-perfect industry dynamics: A framework for empirical work. Review of Economic Studies, 62(1):53-82, 1995.
[2] R. L. Goettler, C. A. Parlour, and U. Rajan. Equilibrium in a dynamic limit order market. Forthcoming, Journal of Finance, 2004.
[3] E. Maskin and J. Tirole. A theory of dynamic oligopoly, I and II. Econometrica, 56(3):549-570, 1988.
[4] U. Doraszelski and M. Satterthwaite. Foundations of Markov-perfect industry dynamics: Existence, purification, and multiplicity. Working Paper, Hoover Institution, 2003.
[5] G. Y. Weintraub, C. L. Benkard, and B. Van Roy. Markov perfect industry dynamics with many firms. Submitted for publication, 2005.
[6] A. Pakes. A framework for applied dynamic analysis in i.o. NBER Working Paper 8024, 2000.
Razvan C. Bunescu
Department of Computer Sciences
University of Texas at Austin
1 University Station C0500
Austin, TX 78712
[email protected]
Raymond J. Mooney
Department of Computer Sciences
University of Texas at Austin
1 University Station C0500
Austin, TX 78712
[email protected]
Abstract
We present a new kernel method for extracting semantic relations between entities in natural language text, based on a generalization of subsequence kernels. This kernel uses three types of subsequence patterns
that are typically employed in natural language to assert relationships
between two entities. Experiments on extracting protein interactions
from biomedical corpora and top-level relations from newspaper corpora
demonstrate the advantages of this approach.
1
Introduction
Information Extraction (IE) is an important task in natural language processing, with many
practical applications. It involves the analysis of text documents, with the aim of identifying
particular types of entities and relations among them. Reliably extracting relations between
entities in natural-language documents is still a difficult, unsolved problem. Its inherent
difficulty is compounded by the emergence of new application domains, with new types
of narrative that challenge systems developed for other, well-studied domains. Traditionally, IE systems have been trained to recognize names of people, organizations, locations
and relations between them (MUC [1], ACE [2]). For example, in the sentence "protesters seized several pumping stations", the task is to identify a LOCATED AT relationship between protesters (a PERSON entity) and stations (a LOCATION entity). Recently, substantial resources have been allocated for automatically extracting information from biomedical
corpora, and consequently much effort is currently spent on automatically identifying biologically relevant entities, as well as on extracting useful biological relationships such as
protein interactions or subcellular localizations. For example, the sentence "TR6 specifically binds Fas ligand" asserts an interaction relationship between the two proteins TR6
and Fas ligand. As in the case of the more traditional applications of IE, systems based on
manually developed extraction rules [3, 4] were soon superseded by information extractors
learned through training on supervised corpora [5, 6]. One challenge posed by the biological domain is that current systems for doing part-of-speech (POS) tagging or parsing do
not perform as well on the biomedical narrative as on the newspaper corpora on which they
were originally trained. Consequently, IE systems developed for biological corpora need
to be robust to POS or parsing errors, or to give reasonable performance using shallower
but more reliable information, such as chunking instead of parsing.
Motivated by the task of extracting protein-protein interactions from biomedical corpora,
we present a generalization of the subsequence kernel from [7] that works with sequences
containing combinations of words and word classes. This generalized kernel is further
tailored for the task of relation extraction. Experimental results show that the new relation
kernel outperforms two previous rule-based methods for interaction extraction. With a
small modification, the same kernel is used for extracting top-level relations from ACE
corpora, providing better results than a recent approach based on dependency tree kernels.
2
Background
One of the first approaches to extracting protein interactions is that of Blaschke et al., described in [3, 4]. Their system is based on a set of manually developed rules, where each
rule (or frame) is a sequence of words (or POS tags) and two protein-name tokens. Between every two adjacent words is a number indicating the maximum number of intervening words allowed when matching the rule to a sentence. An example rule is "interaction of (3) <P> (3) with (3) <P>", where "<P>" is used to denote a protein name. A sentence matches the rule if and only if it satisfies the word constraints in the given order and respects the respective word gaps.
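To make the matching semantics concrete, a gap-constrained frame can be checked with a simple recursive scan. The following sketch is illustrative only and is not the original system; `gaps[k]` is the maximum number of intervening words allowed between frame tokens k and k+1.

```python
# Hedged sketch of gap-constrained frame matching. The frame
# "interaction of (3) <P> (3) with (3) <P>" becomes
# frame = ["interaction", "of", "<P>", "with", "<P>"] with gaps = [0, 3, 3, 3].
def matches(sentence, frame, gaps):
    def search(pos, k, bound):
        if k == len(frame):
            return True
        hi = len(sentence) if bound is None else min(len(sentence), pos + bound + 1)
        for j in range(pos, hi):
            if sentence[j] == frame[k]:
                nxt = gaps[k] if k < len(gaps) else None
                if search(j + 1, k + 1, nxt):
                    return True
        return False
    return search(0, 0, None)

tokens = ["the", "interaction", "of", "<P>", "with", "<P>"]
print(matches(tokens, ["interaction", "of", "<P>", "with", "<P>"], [0, 3, 3, 3]))  # True
```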
In [6] the authors described a new method ELCS (Extraction using Longest Common Subsequences) that automatically learns such rules. ELCS? rule representation is similar to
that in [3, 4], except that it currently does not use POS tags, but allows disjunctions of
words. An example rule learned by this system is "- (7) interaction (0) [between | of] (5) <P> (9) <P> (17) .". Words in square brackets separated by "|" indicate disjunctive
lexical constraints, i.e. one of the given words must match the sentence at that position.
The numbers in parentheses between adjacent constraints indicate the maximum number
of unconstrained words allowed between the two.
3
Extraction using a Relation Kernel
Both Blaschke and ELCS do interaction extraction based on a limited set of matching
rules, where a rule is simply a sparse (gappy) subsequence of words or POS tags anchored
on the two protein-name tokens. Therefore, the two methods share a common limitation:
either through manual selection (Blaschke), or as a result of the greedy learning procedure
(ELCS), they end up using only a subset of all possible anchored sparse subsequences.
Ideally, we would want to use all such anchored sparse subsequences as features, with
weights reflecting their relative accuracy. However explicitly creating for each sentence a
vector with a position for each such feature is infeasible, due to the high dimensionality
of the feature space. Here we can exploit dual learning algorithms that process examples
only via computing their dot-products, such as the Support Vector Machines (SVMs) [8].
Computing the dot-product between two such vectors amounts to calculating the number
of common anchored subsequences between the two sentences. This can be done very
efficiently by modifying the dynamic programming algorithm used in the string kernel
from [7] to account only for common sparse subsequences constrained to contain the two
protein-name tokens. We further prune down the feature space by utilizing the following
property of natural language statements: when a sentence asserts a relationship between
two entity mentions, it generally does this using one of the following three patterns:
• [FB] Fore-Between: words before and between the two entity mentions are simultaneously used to express the relationship. Examples: "interaction of ⟨P1⟩ with ⟨P2⟩", "activation of ⟨P1⟩ by ⟨P2⟩".
• [B] Between: only words between the two entities are essential for asserting the relationship. Examples: "⟨P1⟩ interacts with ⟨P2⟩", "⟨P1⟩ is activated by ⟨P2⟩".
• [BA] Between-After: words between and after the two entity mentions are simultaneously used to express the relationship. Examples: "⟨P1⟩ - ⟨P2⟩ complex", "⟨P1⟩ and ⟨P2⟩ interact".
Another observation is that all these patterns use at most 4 words to express the relationship
(not counting the two entity names). Consequently, when computing the relation kernel,
we restrict the counting of common anchored subsequences only to those having one of
the three types described above, with a maximum word-length of 4. This type of feature
selection leads not only to a faster kernel computation, but also to less overfitting, which
results in increased accuracy (see Section 5 for comparative experiments).
The patterns enumerated above are completely lexicalized and consequently their performance is limited by data sparsity. This can be alleviated by categorizing words into classes
with varying degrees of generality, and then allowing patterns to use both words and their
classes. Examples of word classes are POS tags and generalizations over POS tags such as
Noun, Active Verb or Passive Verb. The entity type can also be used, if the word is part of
a known named entity, as well as the type of the chunk containing the word, when chunking information is available. Content words such as nouns and verbs can also be related to
their synsets via WordNet. Patterns then will consist of sparse subsequences of words, POS
tags, general POS (GPOS) tags, entity and chunk types, or WordNet synsets. For example,
"Noun of ⟨P1⟩ by ⟨P2⟩" is an FB pattern based on words and general POS tags.
4
Subsequence Kernels for Relation Extraction
We are going to show how to compute the relation kernel described in the previous section
in two steps. First, in Section 4.1 we present a generalization of the subsequence kernel
from [7]. This new kernel works with patterns construed as mixtures of words and word
classes. Based on this generalized subsequence kernel, in Section 4.2 we formally define
and show the efficient computation of the relation kernel used in our experiments.
4.1
A Generalized Subsequence Kernel
Let $\Sigma_1, \Sigma_2, \ldots, \Sigma_k$ be some disjoint feature spaces. Following the example in Section 3, $\Sigma_1$ could be the set of words, $\Sigma_2$ the set of POS tags, etc. Let $\Sigma_\times = \Sigma_1 \times \Sigma_2 \times \ldots \times \Sigma_k$ be the set of all possible feature vectors, where a feature vector would be associated with each position in a sentence. Given two feature vectors $x, y \in \Sigma_\times$, let $c(x, y)$ denote the number of common features between x and y. The next notation follows that introduced in [7]. Thus, let s, t be two sequences over the finite set $\Sigma_\times$, and let $|s|$ denote the length of $s = s_1 \ldots s_{|s|}$. The sequence $s[i:j]$ is the contiguous subsequence $s_i \ldots s_j$ of s. Let $i = (i_1, \ldots, i_{|i|})$ be a sequence of $|i|$ indices in s, in ascending order. We define the length $l(i)$ of the index sequence i in s as $i_{|i|} - i_1 + 1$. Similarly, j is a sequence of $|j|$ indices in t. Let $\Sigma_\cup = \Sigma_1 \cup \Sigma_2 \cup \ldots \cup \Sigma_k$ be the set of all possible features. We say that the sequence $u \in \Sigma_\cup^*$ is a (sparse) subsequence of s if there is a sequence of $|u|$ indices i such that $u_k \in s_{i_k}$, for all $k = 1, \ldots, |u|$. Equivalently, we write $u \prec s[i]$ as a shorthand for the component-wise '$\in$' relationship between u and s[i].
Finally, let $K_n(s, t, \lambda)$ (Equation 1) be the number of weighted sparse subsequences u of length n common to s and t (i.e. $u \prec s[i]$, $u \prec t[j]$), where the weight of u is $\lambda^{l(i)+l(j)}$, for some $\lambda \le 1$.

$$K_n(s, t, \lambda) = \sum_{u \in \Sigma_\cup^n} \sum_{i: u \prec s[i]} \sum_{j: u \prec t[j]} \lambda^{l(i)+l(j)} \qquad (1)$$
Because for two fixed index sequences i and j, both of length n, the size of the set $\{u \in \Sigma_\cup^n \mid u \prec s[i], u \prec t[j]\}$ is $\prod_{k=1}^{n} c(s_{i_k}, t_{j_k})$, we can rewrite $K_n(s, t, \lambda)$ as in Equation 2:

$$K_n(s, t, \lambda) = \sum_{i:|i|=n} \sum_{j:|j|=n} \prod_{k=1}^{n} c(s_{i_k}, t_{j_k})\, \lambda^{l(i)+l(j)} \qquad (2)$$
We use $\lambda$ as a decaying factor that penalizes longer subsequences. For sparse subsequences, this means that wider gaps will be penalized more, which is exactly the desired behavior for our patterns. Through them, we try to capture head-modifier dependencies that are important for relation extraction; for lack of reliable dependency information, the larger the word gap is between two words, the less confident we are in the existence of a head-modifier relationship between them.
To enable an efficient computation of $K_n$, we use the auxiliary function $K'_n$ with a similar definition as $K_n$, the only difference being that it counts the length from the beginning of the particular subsequence u to the end of the strings s and t, as illustrated in Equation 3:

$$K'_n(s, t, \lambda) = \sum_{u \in \Sigma_\cup^n} \sum_{i: u \prec s[i]} \sum_{j: u \prec t[j]} \lambda^{|s|+|t|-i_1-j_1+2} \qquad (3)$$

An equivalent formula for $K'_n(s, t, \lambda)$ is obtained by changing the exponent of $\lambda$ from Equation 2 to $|s| + |t| - i_1 - j_1 + 2$.
Based on all definitions above, $K_n$ can be computed in $O(kn|s||t|)$ time, by modifying the recursive computation from [7] with the new factor c(x, y), as shown in Figure 1. In this figure, the sequence sx is the result of appending x to s (with ty defined in a similar way). To avoid clutter, the parameter $\lambda$ is not shown in the argument list of K and K', unless it is instantiated to a specific constant.

$$\begin{aligned}
K'_0(s, t) &= 1, \ \text{for all } s, t\\
K''_i(sx, ty) &= \lambda K''_i(sx, t) + \lambda^2 K'_{i-1}(s, t) \cdot c(x, y)\\
K'_i(sx, t) &= \lambda K'_i(s, t) + K''_i(sx, t)\\
K_n(sx, t) &= K_n(s, t) + \sum_j \lambda^2 K'_{n-1}(s, t[1:j-1]) \cdot c(x, t[j])
\end{aligned}$$

Figure 1: Computation of subsequence kernel.
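To make the recursion concrete, the dynamic program of Figure 1 can be transcribed directly. The following is a minimal sketch, not the authors' implementation: each position is assumed to be a set of features (e.g. {word, POS tag}) so that c(x, y) = |x ∩ y|, with λ = 0.75 as in Section 5.

```python
# Hedged sketch of the generalized subsequence kernel of Figure 1.
def subsequence_kernel(s, t, n, lam=0.75):
    """Return [K_1(s,t), ..., K_n(s,t)] for sequences of feature sets s and t."""
    def c(x, y):
        return len(x & y)  # number of common features at a position pair

    # Kp[i][a][b] = K'_i(s[:a], t[:b]); K'_0 = 1 everywhere.
    Kp = [[[1.0] * (len(t) + 1) for _ in range(len(s) + 1)]]
    for i in range(1, n):
        Kp.append([[0.0] * (len(t) + 1) for _ in range(len(s) + 1)])
        for a in range(i, len(s) + 1):
            Kpp = 0.0  # K''_i(s[:a], t[:b]), swept left to right over b
            for b in range(i, len(t) + 1):
                Kpp = lam * Kpp + lam**2 * Kp[i - 1][a - 1][b - 1] * c(s[a - 1], t[b - 1])
                Kp[i][a][b] = lam * Kp[i][a - 1][b] + Kpp
    # K_n(sx, t) = K_n(s, t) + sum_j lam^2 * K'_{n-1}(s, t[1:j-1]) * c(x, t[j])
    K = [0.0] * (n + 1)
    for i in range(1, n + 1):
        for a in range(i, len(s) + 1):
            for b in range(i, len(t) + 1):
                K[i] += lam**2 * Kp[i - 1][a - 1][b - 1] * c(s[a - 1], t[b - 1])
    return K[1:]

s = [{"interaction", "NN"}, {"of", "IN"}, {"<P1>", "NNP"}, {"with", "IN"}, {"<P2>", "NNP"}]
t = [{"interaction", "NN"}, {"between", "IN"}, {"<P1>", "NNP"}, {"and", "CC"}, {"<P2>", "NNP"}]
print(subsequence_kernel(s, t, n=3))
```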
4.2
Computing the Relation Kernel
As described in Section 2, the input consists of a set of sentences, where each sentence contains exactly two entities (protein names in the case of interaction extraction). In Figure 2
we show the segments that will be used for computing the relation kernel between two example sentences s and t. In sentence s for instance, $x_1$ and $x_2$ are the two entities, $s_f$ is the sentence segment before $x_1$, $s_b$ is the segment between $x_1$ and $x_2$, and $s_a$ is the sentence segment after $x_2$. For convenience, we also include the auxiliary segment $s'_b = x_1 s_b x_2$, whose span is computed as $l(s'_b) = l(s_b) + 2$ (in all length computations, we consider $x_1$ and $x_2$ as contributing one unit only).

[Figure 2 depicts the two example sentences segmented as $s = s_f\, x_1\, s_b\, x_2\, s_a$ with $s'_b = x_1 s_b x_2$, and $t = t_f\, y_1\, t_b\, y_2\, t_a$ with $t'_b = y_1 t_b y_2$.]

Figure 2: Sentence segments.
The relation kernel computes the number of common patterns between two sentences s and
t, where the set of patterns is restricted to the three types introduced in Section 3. Therefore,
the kernel rK(s, t) is expressed as the sum of three sub-kernels: fbK(s, t) counting the number of common fore-between patterns, bK(s, t) for between patterns, and baK(s, t) for between-after patterns, as in Figure 3.

$$\begin{aligned}
rK(s, t) &= fbK(s, t) + bK(s, t) + baK(s, t)\\
bK_i(s, t) &= K_i(s_b, t_b, 1) \cdot c(x_1, y_1) \cdot c(x_2, y_2) \cdot \lambda^{l(s'_b)+l(t'_b)}\\
fbK(s, t) &= \sum_{i,j} bK_i(s, t) \cdot K'_j(s_f, t_f), \quad 1 \le i,\ 1 \le j,\ i + j < fb_{max}\\
bK(s, t) &= \sum_i bK_i(s, t), \quad 1 \le i \le b_{max}\\
baK(s, t) &= \sum_{i,j} bK_i(s, t) \cdot K'_j(s_a^-, t_a^-), \quad 1 \le i,\ 1 \le j,\ i + j < ba_{max}
\end{aligned}$$

Figure 3: Computation of relation kernel.
All three sub-kernels include in their computation the counting of common subsequences between $s'_b$ and $t'_b$. In order to speed up the computation, all these common counts can be calculated separately in $bK_i$, which is defined as the number of common subsequences of length i between $s'_b$ and $t'_b$, anchored at $x_1$/$x_2$ and $y_1$/$y_2$ respectively (i.e. constrained to start at $x_1$/$y_1$ and to end at $x_2$/$y_2$). Then fbK simply counts the number of subsequences that match j positions before the first entity and i positions between the entities, constrained to have length less than a constant $fb_{max}$. To obtain a similar formula for baK we simply use the reversed (mirror) version of segments $s_a$ and $t_a$ (e.g. $s_a^-$ and $t_a^-$). In Section 3 we observed that all three subsequence patterns use at most 4 words to express a relation, therefore we set the constants $fb_{max}$, $b_{max}$ and $ba_{max}$ to 4. Kernels K and K' are computed using the procedure described in Section 4.1.
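Given implementations of K and K' (e.g. adapted from the sketch after Figure 1), assembling rK is mostly bookkeeping over the segments of Figure 2. The following is a hedged sketch; the helper signatures `K(seg_s, seg_t, i, lam)` and `Kprime(seg_s, seg_t, j, lam)` are hypothetical, and an example is represented as (token feature sets, index of x1, index of x2).

```python
# Hedged sketch of Figure 3; K and Kprime are assumed to implement the
# (primed) subsequence kernels of Section 4.1 with hypothetical signatures.
FB_MAX = B_MAX = BA_MAX = 4

def relation_kernel(ex_s, ex_t, K, Kprime, lam=0.75):
    (s, u1, u2), (t, v1, v2) = ex_s, ex_t
    c = lambda x, y: len(x & y)
    sf, sb, sa = s[:u1], s[u1 + 1:u2], s[u2 + 1:]
    tf, tb, ta = t[:v1], t[v1 + 1:v2], t[v2 + 1:]

    def bK(i):  # anchored "between" count, with the full-span lambda factor
        return (K(sb, tb, i, 1.0) * c(s[u1], t[v1]) * c(s[u2], t[v2])
                * lam ** (len(sb) + 2 + len(tb) + 2))

    total = sum(bK(i) for i in range(1, B_MAX + 1))        # B patterns
    for i in range(1, FB_MAX):                             # FB patterns
        for j in range(1, FB_MAX - i):
            total += bK(i) * Kprime(sf, tf, j, lam)
    for i in range(1, BA_MAX):                             # BA patterns (mirrored)
        for j in range(1, BA_MAX - i):
            total += bK(i) * Kprime(sa[::-1], ta[::-1], j, lam)
    return total
```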
5
Experimental Results
The relation kernel (ERK) is evaluated on the task of extracting relations from two corpora with different types of narrative, which are described in more detail in the following
sections. In both cases, we assume that the entities and their labels are known. All preprocessing steps (sentence segmentation, tokenization, POS tagging and chunking) were performed using the OpenNLP package.¹ If a sentence contains n entities ($n \ge 2$), it is replicated into $\binom{n}{2}$ sentences, each containing only two entities. If the two entities are known to be in a relationship, then the replicated sentence is added to the set of corresponding positive sentences; otherwise it is added to the set of negative sentences. During testing, a sentence having n entities ($n \ge 2$) is again replicated into $\binom{n}{2}$ sentences in a similar way.
The relation kernel is used in conjunction with SVM learning in order to find a decision
hyperplane that best separates the positive examples from the negative examples. We modified the LibSVM package² by plugging in the kernel described above. In all experiments, the decay factor $\lambda$ is set to 0.75. The performance is measured using precision (percentage of correctly extracted relations out of total extracted) and recall (percentage of correctly extracted relations out of the total number of relations annotated in the corpus). When
PR curves are reported, the precision and recall are computed using output from 10-fold
cross-validation. The graph points are obtained by varying a threshold on the minimum
acceptable extraction confidence, based on the probability estimates from LibSVM.
¹ URL: http://opennlp.sourceforge.net
² URL: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
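As an aside on implementation, since rK is computed pairwise, an alternative to modifying LibSVM is to pass a precomputed Gram matrix to any SVM package. A hypothetical scikit-learn sketch (not the setup used here) follows; `examples`, `labels` and `relation_kernel` are assumed to exist.

```python
import numpy as np
from sklearn.svm import SVC

# Hedged sketch: train an SVM on a precomputed Gram matrix of rK values.
def train_relation_svm(examples, labels, relation_kernel):
    n = len(examples)
    G = np.array([[relation_kernel(a, b) for b in examples] for a in examples])
    clf = SVC(kernel="precomputed", probability=True)  # probabilities for PR curves
    clf.fit(G, labels)
    return clf
```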
5.1
Interaction Extraction from AImed
We did comparative experiments on the AImed corpus, which has been previously used
for training the protein interaction extraction systems in [6]. It consists of 225 Medline
abstracts, of which 200 are known to describe interactions between human proteins, while
the other 25 do not refer to any interaction. There are 4084 protein references and around
1000 tagged interactions in this dataset.
We compare the following three systems on the task of retrieving protein interactions from
AImed (assuming gold standard proteins):
• [Manual]: We report the performance of the rule-based system of [3, 4].
• [ELCS]: We report the 10-fold cross-validated results from [6] as a PR graph.
• [ERK]: Based on the same splits as ELCS, we compute the corresponding precision-recall graph. In order to have a fair comparison with the other two systems, which use only lexical information, we do not use any word classes here.
The results, summarized in Figure 4(a), show that the relation kernel outperforms both
ELCS and the manually written rules.
[Figure 4 contains two precision-recall plots, each with Recall (%) on the horizontal axis and Precision (%) on the vertical axis: (a) ERK vs. Manual vs. ELCS; (b) ERK vs. ERK-A.]

Figure 4: PR curves for interaction extractors.
To evaluate the impact that the three types of patterns have on performance, we compare
ERK with an ablated system (ERK-A) that uses all possible patterns, constrained only to
be anchored on the two entity names. As can be seen in Figure 4(b), the three patterns (FB,
B, BA) do lead to a significant increase in performance, especially for higher recall levels.
5.2
Relation Extraction from ACE
To evaluate how well this relation kernel ports to other types of narrative, we applied it
to the problem of extracting top-level relations from the ACE corpus [2], the version used
for the September 2002 evaluation. The training part of this dataset consists of 422 documents, with a separate set of 97 documents allocated for testing. This version of the
ACE corpus contains three types of annotations: coreference, named entities and relations. There are five types of entities (PERSON, ORGANIZATION, FACILITY, LOCATION, and GEO-POLITICAL ENTITY) which can participate in five general, top-level relations: ROLE, PART, LOCATED, NEAR, and SOCIAL. A recent approach to extracting relations
is described in [9]. The authors use a generalized version of the tree kernel from [10] to
compute a kernel over relation examples, where a relation example consists of the smallest
dependency tree containing the two entities of the relation. Precision and recall values are
reported for the task of extracting the 5 top-level relations in the ACE corpus under two
different scenarios:
• [S1] This is the classic setting: one multi-class SVM is learned to discriminate among the 5 top-level classes, plus one more class for the no-relation cases.
• [S2] One binary SVM is trained for relation detection, meaning that all positive relation instances are combined into one class. The thresholded output of this binary classifier is used as training data for a second multi-class SVM, trained for relation classification.
We trained our relation kernel, under the first scenario, to recognize the same 5 top-level
relation types. While for interaction extraction we used only the lexicalized version of the
kernel, here we utilize more features, corresponding to the following feature spaces: $\Sigma_1$ is the word vocabulary, $\Sigma_2$ is the set of POS tags, $\Sigma_3$ is the set of generic POS tags, and $\Sigma_4$ contains the 5 entity types. We also used chunking information as follows: all (sparse)
subsequences were created exclusively from the chunk heads, where a head is defined as
the last word in a chunk. The same criterion was used for computing the length of a subsequence: all words other than head words were ignored. This is based on the observation
that in general words other than the chunk head do not contribute to establishing a relationship between two entities outside of that chunk. One exception is when both entities in the
example sentence are contained in the same chunk. This happens very often due to noun-noun ("U.S. troops") or adjective-noun ("Serbian general") compounds. In these cases, we
let one chunk contribute both entity heads. Also, an important difference from the interaction extraction case is that often the two entities in a relation do not have any words
separating them, as for example in noun-noun compounds. None of the three patterns from
Section 3 capture this type of dependency, therefore we introduced a fourth type of pattern,
the modifier pattern M. This pattern consists of a sequence of length two formed from the
head words (or their word classes) of the two entities. Correspondingly, we updated the
relation kernel from Figure 3 with a new kernel term mK, as illustrated in Equation 4.
$$rK(s, t) = fbK(s, t) + bK(s, t) + baK(s, t) + mK(s, t) \qquad (4)$$

The sub-kernel mK corresponds to a product of counts, as shown in Equation 5.

$$mK(s, t) = c(x_1, y_1) \cdot c(x_2, y_2) \cdot \lambda^{2+2} \qquad (5)$$
We present in Table 1 the results of using our updated relation kernel to extract relations
from ACE, under the first scenario. We also show the results presented in [9] for their best
performing kernel K4 (a sum between a bag-of-words kernel and the dependency kernel)
under both scenarios.
Table 1: Extraction Performance on ACE.

Method      Precision   Recall   F-measure
(S1) ERK    73.9        35.2     47.7
(S1) K4     70.3        26.3     38.0
(S2) K4     67.1        35.0     45.8
Even though it uses less sophisticated syntactic and semantic information, ERK in S1 significantly outperforms the dependency kernel. Also, ERK already performs a few percentage points better than K4 in S2. Therefore we expect to get an even more significant
increase in performance by training our relation kernel in the same cascaded fashion.
6
Related Work
In [10], a tree kernel is defined over shallow parse representations of text, together with
an efficient algorithm for computing it. Experiments on extracting PERSON-AFFILIATION and ORGANIZATION-LOCATION relations from 200 news articles show the advantage of
using this new type of tree kernels over three feature-based algorithms. The same kernel
was slightly generalized in [9] and applied on dependency tree representations of sentences,
with dependency trees being created from head-modifier relationships extracted from syntactic parse trees. Experimental results show a clear win of the dependency tree kernel over
a bag-of-words kernel. However, in a bag-of-words approach the word order is completely
lost. For relation extraction, word order is important, and our experimental results support
this claim ? all subsequence patterns used in our approach retain the order between words.
The tree kernels used in the two methods above are opaque in the sense that the semantics
of the dimensions in the corresponding Hilbert space is not obvious. For subsequence
kernels, the semantics is known by definition: each subsequence pattern corresponds to
a dimension in the Hilbert space. This enabled us to easily restrict the types of patterns
counted by the kernel to the three types that we deemed relevant for relation extraction.
7
Conclusion and Future Work
We have presented a new relation extraction method based on a generalization of subsequence kernels. When evaluated on a protein interaction dataset, the new method showed
better performance than two previous rule-based systems. After a small modification, the
same kernel was evaluated on the task of extracting top-level relations from the ACE corpus, showing better performance when compared with a recent dependency tree kernel.
An experiment that we expect to lead to better performance was already suggested in Section 5.2: using the relation kernel in a cascaded fashion, in order to improve the low recall caused by the highly unbalanced data distribution. Another performance gain may come from setting the factor $\lambda$ to a more appropriate value based on a development dataset.
Currently, the method assumes the named entities are known. A natural extension is to
integrate named entity recognition with relation extraction. Recent research [11] indicates
that a global model that captures the mutual influences between the two tasks can lead to
significant improvements in accuracy.
8
Acknowledgements
This work was supported by grants IIS-0117308 and IIS-0325116 from the NSF. We would
like to thank Rohit J. Kate and the anonymous reviewers for helpful observations.
References
[1] R. Grishman, Message Understanding Conference 6, http://cs.nyu.edu/cs/faculty/grishman/muc6.html (1995).
[2] NIST, ACE - Automatic Content Extraction, http://www.nist.gov/speech/tests/ace (2000).
[3] C. Blaschke, A. Valencia, Can bibliographic pointers for known biological data be found automatically? Protein interactions as a case study, Comparative and Functional Genomics 2 (2001) 196-206.
[4] C. Blaschke, A. Valencia, The frame-based module of the Suiseki information extraction system, IEEE Intelligent Systems 17 (2002) 14-20.
[5] S. Ray, M. Craven, Representing sentence structure in hidden Markov models for information extraction, in: Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-2001), Seattle, WA, 2001, pp. 1273-1279.
[6] R. Bunescu, R. Ge, R. J. Kate, E. M. Marcotte, R. J. Mooney, A. K. Ramani, Y. W. Wong, Comparative experiments on learning information extractors for proteins and their interactions, Artificial Intelligence in Medicine (special issue on Summarization and Information Extraction from Medical Documents) 33 (2) (2005) 139-155.
[7] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, C. Watkins, Text classification using string kernels, Journal of Machine Learning Research 2 (2002) 419-444.
[8] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, 1998.
[9] A. Culotta, J. Sorensen, Dependency tree kernels for relation extraction, in: Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), Barcelona, Spain, 2004, pp. 423-429.
[10] D. Zelenko, C. Aone, A. Richardella, Kernel methods for relation extraction, Journal of Machine Learning Research 3 (2003) 1083-1106.
[11] D. Roth, W. Yih, A linear programming formulation for global inference in natural language tasks, in: Proceedings of the Annual Conference on Computational Natural Language Learning (CoNLL), Boston, MA, 2004, pp. 1-8.
and Normalization on Pairwise Undirected
Graphs of Arbitrary Topology
Firas Hamze
Nando de Freitas
Department of Computer Science
University of British Columbia
Abstract
This paper presents a new sampling algorithm for approximating functions of variables representable as undirected graphical models of arbitrary connectivity with pairwise potentials, as well as for estimating the
notoriously difficult partition function of the graph. The algorithm fits
into the framework of sequential Monte Carlo methods rather than the
more widely used MCMC, and relies on constructing a sequence of intermediate distributions which get closer to the desired one. While the
idea of using ?tempered? proposals is known, we construct a novel sequence of target distributions where, rather than dropping a global temperature parameter, we sequentially couple individual pairs of variables
that are, initially, sampled exactly from a spanning tree of the variables.
We present experimental results on inference and estimation of the partition function for sparse and densely-connected graphs.
1 Introduction
Undirected graphical models are powerful statistical tools having a wide range of applications in diverse fields such as image analysis [1, 2], conditional random fields [3], neural
models [4] and epidemiology [5]. Typically, when doing inference, one is interested in
obtaining the local beliefs, that is the marginal probabilities of the variables given the evidence set. The methods used to approximate these intractable quantities generally fall into
the categories of Markov Chain Monte Carlo (MCMC) [6] and variational methods [7].
The former, involving running a Markov chain whose invariant distribution is the distribution of interest, can suffer from slow convergence to stationarity and high correlation
between samples at stationarity, while the latter is not guaranteed to give the right answer
or always converge. When performing learning in such models however, a more serious
problem arises: the parameter update equations involve the normalization constant of the
joint model at the current value of parameters, from here on called the partition function.
MCMC offers no obvious way of approximating this wildly intractable sum [5, 8]. Although there exists a polynomial time MCMC algorithm for simple graphs with binary
nodes, ferromagnetic potentials and uniform observations [9], this algorithm is hardly applicable to the complex models encountered in practice. Of more interest, perhaps, are
the theoretical results that show that Gibbs sampling and even Swendsen-Wang [10] can
mix exponentially slowly in many situations [11]. This paper introduces a new sequential
Monte Carlo method for approximating expectations of a pairwise graph?s variables (of
which beliefs are a special case) and of reasonably estimating the partition function. Intuitively, the new method uses interacting parallel chains to handle multimodal distributions,
[Figure 1 shows a small undirected graph of hidden nodes; two labeled nodes $x_i$ and $x_j$ are joined by an interaction potential $\psi(x_i, x_j)$, and an observation node y is attached to $x_j$ through $\phi(x_j, y)$.]

Figure 1: A small example of the type of graphical model treated in this paper. The observations correspond to the two shaded nodes.
with communicating chains distributed across the modes. In addition, there is no requirement that the chains converge to equilibrium as the bias due to incomplete convergence is
corrected for by importance sampling.
Formally, given hidden variables x and observations y, the model is specified on a graph
G(V, E), with edges E and M nodes V by:
$$\pi(x, y) = \frac{1}{Z} \prod_{i \in V} \phi(x_i, y_i) \prod_{(i,j) \in E} \psi(x_i, x_j)$$

where $x = \{x_1, \ldots, x_M\}$, Z is the partition function, $\phi(\cdot)$ denotes the observation potentials and $\psi(\cdot)$ denotes the pair-wise interaction potentials, which are strictly positive but otherwise arbitrary. The partition function is $Z = \sum_x \prod_{i \in V} \phi(x_i, y_i) \prod_{(i,j) \in E} \psi(x_i, x_j)$, where the sum is over all possible system states.
We make no assumption about the graph's topology or sparseness; an example is in Figure 1. We present experimental results on both fully-connected graphs (cases where each
node neighbors every other node) and sparse graphs.
Our approach belongs to the framework of Sequential Monte Carlo (SMC), which has
its roots in the seminal paper of [12]. Particle filters are a well-known instance of SMC
methods [13]. They apply naturally to dynamic systems like tracking. Our situation is
different. We introduce artificial dynamics simply as a constructive strategy for obtaining
samples of a sequence of distributions converging to the distribution of interest. That is,
initially we sample from and easy-to-sample distribution. This distribution is then used as
a proposal mechanism to obtain samples from a slightly more complex distribution that is
closer to the target distribution. The process is repeated until the sequence of distributions
of increasing complexity reaches the target distribution. Our algorithm has connections
to a general annealing strategy proposed in the physics [14] and statistics [15] literature,
known as Annealed Importance Sampling (AIS). AIS is a special case of the general SMC
framework [16]. The term annealing refers to the lowering of a "temperature parameter,"
the process of which makes the joint distribution more concentrated on its modes, whose
number can be massive for difficult problems. The celebrated simulated annealing (SA)
[17] algorithm is an optimization method relying on this phenomenon; presently, however
we are interested in integration and so SA does not apply here.
Our approach does not use a global temperature, but sequentially introduces dependencies
among the variables; graphically, this can be understood as "adding edges" to the graph.
In this paper, we restrict ourselves to discrete state-spaces although the method applies to
arbitrary continuous distributions.
For our initial distribution we choose a spanning tree of the variables, on which analytic
marginalization, exact sampling, and computation of the partition function are easily done.
After drawing a population of samples (particles) from this distribution, the sequential
phase begins: an edge of the desired graph is chosen and gradually added to the current one
as shown in Figure 2. The particles then follow a trajectory according to some proposal
mechanism. The "fitness" of the particles is measured via their importance weights. When
the set of samples has become skewed, that is with some containing high weights and
many containing low ones, the particles are resampled according to their weights. The
sequential structure is thus imposed by the propose-and-resample mechanism rather than by
any property of the original system. The algorithm is formally described after an overview
of SMC and recent work presenting a unifying framework of the SMC methodology outside
the context of Bayesian dynamic filtering[16].
Figure 2: A graphical illustration of our algorithm. First we construct a spanning tree, of which a
population of iid samples can be easily drawn using the forward filtering/backward sampling algorithm for trees. The tree then becomes the proposal mechanism for generating samples for a graph
with an extra potential. The process is repeated until we obtain samples from the target distribution
(defined on a fully connected graph in this case). Edges can be added ?slowly? using a coupling
parameter.
2 Sequential Monte Carlo
As shown in Figure 2, we consider a sequence of auxiliary distributions
$\tilde{\pi}_1(x_1), \tilde{\pi}_2(x_{1:2}), \ldots, \tilde{\pi}_n(x_{1:n})$, where $\tilde{\pi}_1(x_1)$ is the distribution on the weighted spanning tree. The sequence of distributions can be constructed so that it satisfies $\tilde{\pi}_n(x_{1:n}) = \pi_n(x_n)\, \tilde{\pi}_n(x_{1:n-1}|x_n)$. Marginalizing over $x_{1:n-1}$ gives us the target distribution of interest $\pi_n(x_n)$ (the distribution of the graphical model that we want to sample from, as illustrated in Figure 2 for n = 4). So we first focus on sampling from the sequence of auxiliary distributions. The joint distribution is only known up to a normalization constant: $\tilde{\pi}_n(x_{1:n}) = Z_n^{-1} f_n(x_{1:n})$, where $Z_n \triangleq \int f_n(x_{1:n})\, dx_{1:n}$ is the partition function. We are often interested in computing this partition function and other expectations, such as $I(g(x_n)) = \int g(x_n)\, \pi_n(x_n)\, dx_n$, where g is a function of interest (e.g. g(x) = x if we are interested in computing the mean of x).
If we had a set of samples $\{x_{1:n}^{(i)}\}_{i=1}^{N}$ from $\tilde{\pi}$, we could approximate this integral with the following Monte Carlo estimator: $\widehat{\tilde{\pi}}_n(dx_{1:n}) = \frac{1}{N} \sum_{i=1}^{N} \delta_{x_{1:n}^{(i)}}(dx_{1:n})$, where $\delta_{x_{1:n}^{(i)}}(dx_{1:n})$ denotes the Dirac delta function, and consequently approximate any expectations of interest. These estimates converge almost surely to the true expectation as N goes to infinity. It is typically hard to sample from $\tilde{\pi}$ directly. Instead, we sample from a proposal distribution q and weight the samples according to the following importance ratio:

$$w_n = \frac{f_n(x_{1:n})}{q_n(x_{1:n})} = \frac{f_n(x_{1:n})\, q_{n-1}(x_{1:n-1})}{q_n(x_{1:n})\, f_{n-1}(x_{1:n-1})}\, w_{n-1}$$
The proposal is constructed sequentially: $q_n(x_{1:n}) = q_{n-1}(x_{1:n-1})\, q_n(x_n|x_{1:n-1})$. Hence, the importance weights can be updated recursively:

$$w_n = \frac{f_n(x_{1:n})}{q_n(x_n|x_{1:n-1})\, f_{n-1}(x_{1:n-1})}\, w_{n-1} \qquad (1)$$
Given a set of N particles $x_{1:n-1}^{(i)}$, we obtain a set of particles $x_n^{(i)}$ by sampling from $q_n(x_n|x_{1:n-1}^{(i)})$ and applying the weights of equation (1). To overcome slow drift in the particle population, a resampling (selection) step chooses the fittest particles (see the introductory chapter in [13] for a more detailed explanation). We use a state-of-the-art minimum variance resampling algorithm [18].
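A generic skeleton of this sequential importance sampling and resampling loop reads as follows. It is a sketch under the abstractions of this section: `propose` and `incremental_weight` stand in for $q_n$ and equation (1), and multinomial resampling is used for simplicity rather than the minimum variance scheme of [18].

```python
import numpy as np

# Hedged SMC skeleton: propose(x, n) samples x_n ~ q_n(.|x_{1:n-1}) and
# incremental_weight(x_new, x_old, n) returns the ratio in equation (1).
def smc(propose, incremental_weight, x0, N, n_steps, ess_frac=0.5, rng=None):
    rng = rng or np.random.default_rng()
    particles = [x0() for _ in range(N)]   # draws from the initial distribution
    logw = np.zeros(N)
    for n in range(1, n_steps + 1):
        new = [propose(p, n) for p in particles]
        logw += np.log([incremental_weight(xn, xo, n) for xn, xo in zip(new, particles)])
        particles = new
        w = np.exp(logw - logw.max()); w /= w.sum()
        if 1.0 / np.sum(w**2) < ess_frac * N:   # resample when the ESS degenerates
            idx = rng.choice(N, size=N, p=w)
            particles = [particles[i] for i in idx]
            logw = np.zeros(N)
    return particles, logw
```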
The ratio of successive partition functions can be easily estimated using this algorithm as
follows:
$$\frac{Z_n}{Z_{n-1}} = \frac{\int f_n(x_{1:n})\, dx_{1:n}}{Z_{n-1}} = \int \hat{w}_n\, \tilde{\pi}_{n-1}(x_{1:n-1})\, q_n(x_n|x_{1:n-1})\, dx_{1:n} \approx \sum_{i=1}^{N} \hat{w}_n^{(i)}\, \tilde{w}_{n-1}^{(i)},$$

where $\tilde{w}_{n-1}^{(i)} = w_{n-1}^{(i)} / \sum_j w_{n-1}^{(j)}$, $\hat{w}_n^{(i)} = \frac{f_n(x_{1:n}^{(i)})}{q_n(x_n^{(i)}|x_{1:n-1}^{(i)})\, f_{n-1}(x_{1:n-1}^{(i)})}$, and $Z_1$ can be easily computed as it is the partition function for a tree.
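In code, the per-step estimate is a one-liner: multiply the new incremental weights by the normalized weights of the previous step and sum. A small sketch:

```python
import numpy as np

# Hedged sketch: one step of the running partition-function estimate.
# w_prev are the unnormalized weights at step n-1, w_hat the incremental weights.
def log_z_ratio_step(w_prev, w_hat):
    w_tilde = w_prev / w_prev.sum()          # normalized previous weights
    return np.log(np.sum(w_hat * w_tilde))   # estimate of log(Z_n / Z_{n-1})
```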
We can choose a (non-homogeneous) Markov chain with transition kernel $K_n(x_{n-1}, x_n)$ as the proposal distribution $q_n(x_n|x_{1:n-1})$. Hence, given an initial proposal distribution $q_1(\cdot)$, we have the joint proposal distribution at step n: $q_n(x_{1:n}) = q_1(x_1) \prod_{k=2}^{n} K_k(x_{k-1}, x_k)$. It is convenient to assume that the artificial distribution $\tilde{\pi}_n(x_{1:n-1}|x_n)$ is also the product of (backward) Markov kernels: $\tilde{\pi}_n(x_{1:n-1}|x_n) = \prod_{k=1}^{n-1} L_k(x_{k+1}, x_k)$ [16]. Under these choices, the (unnormalized) incremental importance weight becomes:

$$w_n \propto \frac{f_n(x_n)\, L_{n-1}(x_n, x_{n-1})}{f_{n-1}(x_{n-1})\, K_n(x_{n-1}, x_n)} \qquad (2)$$
Different choices of the backward kernel L result in different algorithms [16]. For example, the choice $L_{n-1}(x_n, x_{n-1}) = \frac{f_n(x_{n-1})\, K_n(x_{n-1}, x_n)}{f_n(x_n)}$ results in the AIS algorithm, with weights $w_n \propto \frac{f_n(x_{n-1})}{f_{n-1}(x_{n-1})}$. However, we should point out that this method is more general as one can carry out resampling. Note that in this case, the importance weights do not depend on $x_n$ and, hence, it is possible to do resampling before the importance sampling step. This often leads to a huge reduction in estimation error [19]. Also, note that if there are big discrepancies between $f_n(\cdot)$ and $f_{n-1}(\cdot)$ the method might perform poorly. To overcome this, [16] use variance results to propose a different choice of backward kernel, which results in the following incremental importance weights:

$$w_n \propto \frac{f_n(x_n)}{\int f_{n-1}(x_{n-1})\, K_n(x_{n-1}, x_n)\, dx_{n-1}} \qquad (3)$$
The integral in the denominator can be evaluated when dealing with Gaussian or reasonable
discrete networks.
3 The new algorithm
We could try to perform traditional importance sampling by seeking some proposal distribution for the entire graph. This is very difficult and performance degrades exponentially
in dimension if the proposal is mismatched [20]. We propose, however, to use the samples
from the tree distribution (which we call $\pi_0$) as candidates to an intermediate target distribution, consisting of the tree along with a "weak" version of a potential corresponding to some edge of the original graph. Given a set of edges $G_0$ which form a spanning tree of the target graph, we can use the belief propagation equations [21] and bottom-up
propagation, top-down sampling [22], to draw a set of N independent samples from the
tree. Computation of the normalization constant Z1 is also straightforward and efficient in
the case of trees using a sum-product recursion. From then on, however, the normalization
constants of subsequent target distributions cannot be analytically computed.
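For a chain (the simplest tree; general trees only add per-child message bookkeeping), a minimal sketch of exact sampling and of $Z_1$ follows. It assumes K-state nodes with unary potentials `phi[i]` (length-K arrays) and pairwise potentials `psi[i]` (K-by-K arrays between nodes i and i+1); it is an illustration, not the authors' code.

```python
import numpy as np

def sample_chain(phi, psi, rng=None):
    """Exact sample + partition function for a chain MRF (special case of a tree)."""
    rng = rng or np.random.default_rng()
    M = len(phi)
    beta = [None] * M                 # backward sum-product messages
    beta[M - 1] = np.ones_like(phi[M - 1])
    for i in range(M - 2, -1, -1):
        beta[i] = psi[i] @ (phi[i + 1] * beta[i + 1])
    p0 = phi[0] * beta[0]
    Z = p0.sum()                      # partition function of the tree
    x = [rng.choice(len(p0), p=p0 / Z)]
    for i in range(1, M):             # top-down (forward) sampling
        p = psi[i - 1][x[-1]] * phi[i] * beta[i]
        x.append(rng.choice(len(p), p=p / p.sum()))
    return x, Z

phi = [np.ones(3) for _ in range(5)]
psi = [np.exp(2.0 * np.eye(3)) for _ in range(4)]
x, Z = sample_chain(phi, psi)
```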
We then choose a new edge $e_1$ from the set of "unused" edges $E \setminus G_0$ and add it to $G_0$ to form the new edge set $G_1 = e_1 \cup G_0$. Let the vertices of $e_1$ be $u_1$ and $v_1$. Then, the intermediate target distribution $\pi_1$ is proportional to $\pi_0(x_1)\, \psi_{e_1}(x_{u_1}, x_{v_1})$. In doing straightforward importance sampling, using $\pi_0$ as a proposal for $\pi_1$, the importance weight is proportional to $\psi_{e_1}(x_{u_1}, x_{v_1})$. We adopt a slow proposal process to move the population of particles towards $\pi_1$. We gradually introduce the potential between $X_{u_1}$ and $X_{v_1}$ via a coupling parameter $\alpha$ which increases from 0 to 1 in order to "softly" bring the edge's potential in and allow the particles to adjust to the new environment. Formally, when adding edge $e_1$ to the graph, we introduce a number of coupling steps so that we have the intermediate target distribution:

$$\pi_0(x_0)\, [\psi_{e_1}(x_{u_1}, x_{v_1})]^{\alpha_n}$$

where $\alpha_n$ is defined to be 0 when a new edge enters the sequence, increases to 1 as the edge is brought in, and drops back to zero when another edge is added at the following edge iteration.
At each time step, we want a proposal mechanism that is close to the target distribution.
Proposals based on simple perturbations, such as random walks, are easy to implement, but
can be inefficient. Metropolis-Hastings proposals are not possible because of the integral
in the rejection term. We can, however, employ a single-site Gibbs sampler with random
scan whose invariant distribution at each step is the next target density in the sequence; this kernel is applied to each particle. When an edge has been fully added, a new one is chosen and the process is repeated until the final target density is that of the full graph. We use an
analytic expression for the incremental weights corresponding to Equation (3).
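Putting the pieces together, one edge iteration looks roughly like the sketch below. It is a hedged illustration using the simpler AIS-style weights $\psi_e(x)^{\Delta\alpha}$ (the algorithm above uses the lower-variance weights of equation (3)); `gibbs_sweep(x, alpha)` stands in for a random-scan single-site Gibbs move targeting $\pi_0(x)\,\psi_e(x)^{\alpha}$.

```python
import numpy as np

# Hedged sketch of one "hot coupling" edge iteration. psi_e(x) evaluates the
# new edge's potential at particle state x; gibbs_sweep is assumed to exist.
def add_edge(particles, logw, psi_e, gibbs_sweep, n_anneal=100, rng=None):
    rng = rng or np.random.default_rng()
    N = len(particles)
    d_alpha = 1.0 / n_anneal
    for step in range(1, n_anneal + 1):
        # AIS-style weights do not depend on the new state, so reweight first
        logw += d_alpha * np.log([psi_e(x) for x in particles])
        alpha = step * d_alpha
        particles = [gibbs_sweep(x, alpha) for x in particles]
        w = np.exp(logw - logw.max()); w /= w.sum()
        if 1.0 / np.sum(w**2) < 0.5 * N:         # resample when weights are skewed
            idx = rng.choice(N, size=N, p=w)
            particles = [particles[i] for i in idx]
            logw = np.zeros(N)
    return particles, logw
```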
To alleviate potential confusion with MCMC: while any one particle obviously forms a correlated path, we are using a population and make no assumption or requirement that the chains have converged, as MCMC does, since we correct for incomplete convergence with the weights.
4 Experiments and discussion
Four approximate inference methods were compared: our SMC method with sequential
edge addition (Hot Coupling (HC)), a more typical annealing strategy with a global temperature parameter (SMCG), single-site Gibbs sampling with random scan, and loopy belief
propagation. SMCG can be thought of as related to HC but where all the edges and local
evidence are annealed at the same time.
The majority of our experiments were performed on graphs that were small enough for
exact marginals and partition functions to be exhaustively calculated. However, even in toy
cases MCMC and loopy can give unsatisfactory and sometimes disastrous results. We also
ran a set of experiments on a relatively large MRF.
For the small examples we examined both fully-connected (FC) and square grid (MRF)
networks, with 18 and 16 nodes respectively. Each variable could assume one of 3 states.
Our pairwise potentials corresponded to the well-known Potts model: ψi,j(xi, xj) =
e^{(1/T) Jij δ(xi, xj)}, ψi(xi) = e^{(1/T) J δ(xi, yi)}. We set T = 0.5 (a low temperature) and tested models
with uniform and positive Jij , widely used in image analysis, and models with Jij drawn
from a standard Gaussian; the latter is an instance of the much-studied spin-glass models
of statistical physics which are known to be notoriously difficult to simulate at low temperatures [23]. Of course fully-connected models are known as Boltzmann machines [4] to the
neural computation community. The output potentials were randomly selected in both the
uniform and random interaction cases. The HC method used a linear coupling schedule for
each edge, increasing from γ = 0 to γ = 1 over 100 iterations; our SMCG implementation
used a linear global cooling schedule, whose number of steps depended on the graph in
order to match those taken by HC.
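For reference, the Potts potentials just described are easy to construct; a sketch for the two interaction regimes (grid size and helper names are ours, and the random output potentials are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)

def potts_tables(edges, n_states=3, T=0.5, homogeneous=True):
    """Pairwise Potts potentials psi_ij(x_i, x_j) = exp(J_ij * delta(x_i, x_j) / T).

    homogeneous=True uses J_ij = 1 (uniform, positive couplings); otherwise
    J_ij ~ N(0, 1), the spin-glass-like case from the text."""
    psi = {}
    for e in edges:
        J = 1.0 if homogeneous else rng.normal()
        psi[e] = np.exp(J * np.eye(n_states) / T)  # delta via identity matrix
    return psi

# Edges of a small square-grid MRF, as in the toy examples.
side = 4
edges = [((r, c), (r, c + 1)) for r in range(side) for c in range(side - 1)]
edges += [((r, c), (r + 1, c)) for r in range(side - 1) for c in range(side)]
psi = potts_tables(edges, homogeneous=False)
```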
All Monte Carlo algorithms were independently run 50 times each to approximate the variance of the estimates. Our SMC simulations used 1000 particles for each run, while each
Gibbs run performed 20000 single-site updates. For these models, this was more than
enough steps to settle into local minima; runs of up to 1 million iterations did not yield a
difference, which is characteristic of the exponential mixing time of the sampler on these
graphs. For our HC method, spanning trees and edges in the sequential construction were
randomly chosen from the full graph; the rationale for doing so is to allay any criticism that
?tweaking? the ordering may have had a crucial effect on the algorithm. The order clearly
would matter to some extent, but this will be examined in later work. Also in the tables
by 'error' we mean the quantity |â − a| / a, where â is an estimate of some quantity a obtained exactly (say Z).
First, we used HC, SMCG and Gibbs to approximate the expected sum of our graphs' variables, the so-called magnetization: m = E[Σ_{i=1}^{M} x_i]. We then approximated the partition
functions of the graphs using HC, SMCG, and loopy.¹ We note again that there is no obvious way of estimating Z using Gibbs. Finally, we approximated the marginal probabilities
using the four approximate methods. For loopy, we only kept the runs where it converged.
¹ Code for Bethe Z approximation kindly provided by Kevin Murphy.
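All of these estimates are weighted ensemble averages over the final particle population. A minimal self-normalized version, with synthetic particles and log-weights standing in for real algorithm output:

```python
import numpy as np

def weighted_estimate(X, logw, func):
    """Self-normalized importance-sampling estimate of E[func(x)] from a
    weighted particle population (X: particles, logw: log importance weights)."""
    w = np.exp(logw - logw.max())  # subtract max for numerical stability
    w /= w.sum()
    vals = np.array([func(x) for x in X])
    return np.sum(w * vals)

# Magnetization m = E[sum_i x_i] from a (hypothetical) weighted population.
rng = np.random.default_rng(3)
X = rng.integers(0, 3, size=(1000, 16))
logw = rng.normal(size=1000)
print(weighted_estimate(X, logw, lambda x: x.sum()))
```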
Method | MRF Random ψ      | MRF Homogeneous ψ | FC Random ψ       | FC Homogeneous ψ
       | Error     Var     | Error     Var     | Error     Var     | Error     Var
HC     | 0.0022    0.012   | 0.0251    0.17    | 0.0016    0.0522  | 0.0036    0.038
SMCG   | 0.0001    0.03    | 0.2789    10.09   | 0.127     0.570   | 0.331     165.61
Gibbs  | 0.0003    0.014   | 0.4928    200.95  | 0.02      0.32    | 0.3152    201.08
Figure 3: Approximate magnetization for the nodes of the graphs, as defined in the text, calculated
using HC, SMCG, and Gibbs sampling and compared to the true value obtained by brute force.
Observe the massive variance of Gibbs sampling in some cases.
Method | MRF Random ψ      | MRF Homogeneous ψ | FC Random ψ       | FC Homogeneous ψ
       | Error     Var     | Error     Var     | Error     Var     | Error     Var
HC     | 0.0105    0.002   | 0.0227    0.001   | 0.0043    0.0537  | 0.0394    0.001
SMCG   | 0.004     0.005   | 6.47      7.646   | 1800      1.24    | 1         29.99
loopy  | 0.005     -       | 0.155     -       | 1         -       | 0.075     -
Figure 4: Approximate partition function of the graphs discussed in the text, calculated using HC,
SMCG, and Loopy Belief Propagation (loopy). For HC and SMCG we show the error of the sample
average over 50 independent runs and the variance across those runs. loopy is of course a
deterministic algorithm and has no variance. HC maintains a low error and variance in all cases.
Figure 3 shows the results of the magnetization experiments. On the MRF with random
interactions, all three methods gave very accurate answers with small variance, but for the
other graphs, the accuracies and variances began to diverge. On both positive-potential
graphs, Gibbs sampling gives high error and huge variance; SMCG gives lower variance
but is still quite skewed. On the fully-connected random-potential graph the 3 methods give
good results but HC has the lowest variance. Our method experiences its worst performance
on the homogeneous MRF but it is only 2.5% error!
Figure 4 tabulates the approximate partition function calculations. Again, for the MRF with
random interactions, the 3 methods give estimates of Z of comparable quality. This example appeared to work for loopy, Gibbs, and SMCG. For the homogeneous MRF, SMCG
degrades rapidly; loopy is still satisfactory at 15% error, but HC is at 2.7% with very
low variance. In the fully-connected case with random potentials, HC's error is 0.43%
while loopy's error is very high, having underestimated Z by a factor of 10^5. SMCG fails
completely here as well. On the uniform fully-connected graph, loopy actually gives a
reasonable estimate of Z at 7.5%, but is still beaten by HC.
Figure 5 shows the variational (L1) distance between the exact marginal for a randomly
chosen node in each graph and the approximate marginals of the 4 algorithms, a common
measure of the ?distance? between 2 distributions. For the Monte Carlo methods (HC,
SMCG and Gibbs) the average over 50 independent runs was used to approximate the
expected L1 error of the estimate. All 4 methods perform well on the random ψ MRF.
On the MRF with homogeneous ψ, both loopy and SMCG degrade, but HC maintains
a low error. Among the FC graphs, HC performs extremely well on the homogeneous
ψ and surprisingly loopy does well too. In the random ψ case, loopy's error increases
dramatically.
Our final set of simulations was the classic mean-squared reconstruction of a noisy image problem; we used a 100x100 MRF with a noisy 'patch' image (consisting of shaded,
rectangular regions) with an isotropic 5-state prior model. The object was to calculate the
pixels' posterior marginal expectations. We chose this problem because it is a large model
on which loopy is known to do well, and it can hence provide us with a measure of quality of the HC and SMCG results as larger numbers of edges are involved. From the toy
examples we infer that the mechanism of HC is quite different from that of loopy as we
have seen that it can work when loopy does not. Hence good performance on this problem
would suggest that HC would scale well, which is a crucial question as in the large graph
the final distribution has many more edges than the initial spanning tree. The results were
promising: the mean-squared reconstruction error using loopy and using HC were virtually
identical at 9.067 × 10^−5 and 9.036 × 10^−5 respectively, showing that HC seemed to be
[Figure 5 panels: variational distance for the Fully-Connected Random, Fully-Connected Homogeneous, Grid Model Random, and Grid Model Homogeneous graphs, for HC, SMCG, Gibbs, and Loopy.]
Figure 5: Variational (L1) distance between estimated and true marginals for a randomly chosen
node in each of the 4 graphs using the four approximate methods (smaller values mean less error).
The MRF-random example was again 'easy' for all the methods, but the rest raise problems for all
but HC.
[Figure 6 panels: sample average vs. iteration for three Gibbs runs; left panel, iterations 0-600; right panel, iterations up to 10^6.]
Figure 6: An example of how MCMC can get 'stuck': 3 different runs of a Gibbs sampler estimating
the magnetization of the FC-Homogeneous graph. At left are shown the first 600 iterations of the runs;
after a brief transient behaviour the samplers settled into different minima which persisted for the
entire duration (20000 steps) of the runs. Indeed for 1 million steps the local minima persist, as
shown at right.
robust to the addition of around 9000 edges and many resampling stages. SMCG on the
large MRF did not fare as well.
It is crucial to realize that MCMC is completely unsuited to some problems; see for example the 'convergence' plots of the estimated magnetization of 3 independent Gibbs sampler
runs on one of our 'toy' graphs shown in Figure 6. Such behavior has been studied by
Gore and Jerrum [11] and others, who discuss pessimistic theoretical results on the mixing
properties of both Gibbs sampling and the celebrated Swendsen-Wang algorithm in several cases. To obtain a good estimate, MCMC requires that the process 'visit' each of the
target distribution's basins of energy with a frequency representative of their probability.
Unfortunately, some basins take an exponential amount of time to exit, and so different finite runs of MCMC will give quite different answers, leading to tremendous variance. The
methodology presented here is an attempt to sidestep the whole issue of mixing by permitting the independent particles to be stuck in modes, but then considering them jointly when
estimating. In other words, instead of using a time average, we estimate using a weighted
ensemble average. The object of the sequential phase is to address the difficult problem
of constructing a suitable proposal for high-dimensional problems; to this end, the resampling-based methodology of particle filters was thought to be particularly suited. For the graphs
we have considered, the single-edge algorithm we propose seems to be preferable to global
annealing.
References
[1] S Z Li. Markov random field modeling in image analysis. Springer-Verlag, 2001.
[2] P Carbonetto and N de Freitas. Why can't José read? The problem of learning semantic associations in a robot environment. In Human Language Technology Conference Workshop on
Learning Word Meaning from Non-Linguistic Data, 2003.
[3] J D Lafferty, A McCallum, and F C N Pereira. Conditional random fields: Probabilistic models
for segmenting and labeling sequence data. In International Conference on Machine Learning,
2001.
[4] D E Rumelhart, G E Hinton, and R J Williams. Learning internal representations by error
propagation. In D E Rumelhart and J L McClelland, editors, Parallel Distributed Processing:
Explorations in the Microstructure of Cognition, pages 318-362, Cambridge, MA, 1986.
[5] P J Green and S Richardson. Hidden Markov models and disease mapping. Journal of the
American Statistical Association, 97(460):1055?1070, 2002.
[6] C P Robert and G Casella. Monte Carlo Statistical Methods. Springer-Verlag, New York, 1999.
[7] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational
methods for graphical models. Machine Learning, 37:183-233, 1999.
[8] J Moller, A N Pettitt, K K Berthelsen, and R W Reeves. An efficient Markov chain Monte Carlo
method for distributions with intractable normalising constants. Technical report, The Danish
National Research Foundation: Network in Mathematical Physics and Stochastics, 2004.
[9] M Jerrum and A Sinclair. The Markov chain Monte Carlo method: an approach to approximate
counting and integration. In D S Hochbaum, editor, Approximation Algorithms for NP-hard
Problems, pages 482-519. PWS Publishing, 1996.
[10] R H Swendsen and J S Wang. Nonuniversal critical dynamics in Monte Carlo simulations.
Physical Review Letters, 58(2):86-88, 1987.
[11] V Gore and M Jerrum. The Swendsen-Wang process does not always mix rapidly. In 29th
Annual ACM Symposium on Theory of Computing, 1996.
[12] N Metropolis and S Ulam. The Monte Carlo method. Journal of the American Statistical
Association, 44(247):335-341, 1949.
[13] A Doucet, N de Freitas, and N J Gordon, editors. Sequential Monte Carlo Methods in Practice.
Springer-Verlag, 2001.
[14] C Jarzynski. Nonequilibrium equality for free energy differences. Phys. Rev. Lett., 78, 1997.
[15] R M Neal. Annealed importance sampling. Technical Report No 9805, University of Toronto,
1998.
[16] P Del Moral, A Doucet, and G W Peters. Sequential Monte Carlo samplers. Technical Report
CUED/F-INFENG/2004, Cambridge University Engineering Department, 2004.
[17] S Kirkpatrick, C D Gelatt, and M P Vecchi. Optimization by simulated annealing. Science,
220:671-680, 1983.
[18] G Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models.
Journal of Computational and Graphical Statistics, 5:1-25, 1996.
[19] N de Freitas, R Dearden, F Hutter, R Morales-Menendez, J Mutch, and D Poole. Diagnosis by
a waiter and a Mars explorer. IEEE Proceedings, 92, 2004.
[20] J A Bucklew. Large Deviation Techniques in Decision, Simulation, and Estimation. John Wiley
& Sons, 1986.
[21] J Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann, 1988.
[22] C K Carter and R Kohn. On Gibbs sampling for state space models. Biometrika, 81(3):541?553,
1994.
[23] M E J Newman and G T Barkema. Monte Carlo Methods in Statistical Physics. Oxford University Press, 1999.
1,968 | 2,789 | Large scale networks fingerprinting and
visualization using the k-core decomposition
J. Ignacio Alvarez-Hamelin?
LPT (UMR du CNRS 8627),
Universit?e de Paris-Sud,
91405 ORSAY Cedex France
[email protected]
Luca Dall?Asta
LPT (UMR du CNRS 8627),
Universit?e de Paris-Sud,
91405 ORSAY Cedex France
[email protected]
Alain Barrat
LPT (UMR du CNRS 8627),
Universit?e de Paris-Sud,
91405 ORSAY Cedex France
[email protected]
Alessandro Vespignani
School of Informatics,
Indiana University,
Bloomington, IN 47408, USA
[email protected]
Abstract
We use the k-core decomposition to develop algorithms for the analysis
of large scale complex networks. This decomposition, based on a recursive pruning of the least connected vertices, allows to disentangle the
hierarchical structure of networks by progressively focusing on their central cores. By using this strategy we develop a general visualization algorithm that can be used to compare the structural properties of various networks and highlight their hierarchical structure. The low computational
complexity of the algorithm, O(n + e), where n is the size of the network, and e is the number of edges, makes it suitable for the visualization
of very large sparse networks. We show how the proposed visualization
tool allows to find specific structural fingerprints of networks.
1
Introduction
In recent times, the possibility of accessing, handling and mining large-scale networks
datasets has revamped the interest in their investigation and theoretical characterization
along with the definition of new modeling frameworks. In particular, mapping projects of
the World Wide Web and the physical Internet offered the first chance to study topology
and traffic of large-scale networks. Other studies followed describing population networks
of practical interest in social science, critical infrastructures and epidemiology [1, 2, 3].
The study of large scale networks, however, faces us with an array of new challenges. The
definitions of centrality, hierarchies and structural organizations are hindered by the large
size of these networks and the complex interplay of connectivity patterns, traffic flows and
geographical, social and economical attributes characterizing their basic elements. In this
∗ Further author information: J.I.A-H. is also with Facultad de Ingeniería, Universidad de Buenos
Aires, Paseo Colón 850, C 1063 ACV Buenos Aires, Argentina.
context, a large research effort is devoted to providing effective visualization and analysis
tools able to cope with graphs whose size may easily reach millions of vertices.
In this paper, we propose a visualization algorithm based on the k-core decomposition
able to uncover in a two-dimensional layout several topological and hierarchical properties
of large scale networks. The k-core decomposition [4] consists in identifying particular
subsets of the graph, called k-cores, each one obtained by recursively removing all the
vertices of degree smaller than k, until the degree of all remaining vertices is larger than or
equal to k. Larger values of the index k clearly correspond to vertices with larger degree
and more central position in the network?s structure.
This visualization tool allows the identification of real or computer-generated networks'
fingerprints, according to properties such as hierarchical arrangement, degree correlations
and centrality. The distinction between networks with seemingly similar properties is
achieved by inspecting the different layouts generated by the visualization algorithm. In
addition, the running time of the algorithm grows only linearly with the size of the network, granting the scalability needed for the visualization of very large sparse networks.
The proposed (publicly available [5]) algorithm appears therefore as a convenient method
for the general analysis of large scale complex networks and the study of their architecture.
The paper is organized as follows: after a brief survey on k-core studies (section 2), we
present the basic definitions and the graphical algorithms in section 3 along with the basic
features of the visualization layout. Section 4 shows how the visualizations obtained with
the present algorithm may be used for network fingerprinting, and presents two examples
of visualization of real networks.
2
Related work
While a large number of algorithms aimed at the visualization of large scale networks have
been developed (e.g., see [6]), only a few consider explicitly the k-core decomposition.
Vladimir Batagelj et al. [7] studied the k-core decomposition applied to visualization problems, introducing some graphical tools to analyse the cores, mainly based on the visualization of the adjacency matrix of certain k-cores. To the best of our knowledge, the algorithm
presented by Baur et al. in [8] is the only one completely based on a k-core analysis and
directly targeted at the study of large information networks. This algorithm uses a spectral
layout to place vertices having the largest shell index. A combination of barycentric and
iteratively directed-forces allows to place the vertices of each k-shell, in decreasing order.
Finally, the network is drawn in three dimensions, using the z axis to place each shell in a
distinct horizontal layer. Note that the spectral layout is not able to distinguish two or more
disconnected components. The algorithm by Baur et al. is also tuned for representing AS
graphs and its total complexity depends on the size of the highest k-core (see [9] for more
details on spectral layout), making the computation time of this proposal largely variable.
In this respect, the algorithm presented here is different in that it can represent networks in
which k-cores are composed by several connected components. Another difference is that
representations in 2D are more suited for information visualization than other representations (see [10] and references therein). Finally, the algorithm parameters can be universally
defined, yielding a fast and general tool for analyzing all types of networks.
It is interesting to note that the notion of k-cores has been recently used in biologically
related contexts, where it was applied to the analysis of protein interaction networks [11] or
in the prediction of protein functions [12, 13]. Further applications in Internet-related areas
can be found in [14], where the k-core decomposition is used for filtering out peripheral
Autonomous Systems (ASes), and in [15] where the scale invariant structure of degree
correlations and mapping biases in AS maps is shown. Finally in [16, 17], an interesting
approach based on the k-core decomposition has been used to provide a conceptual and
structural model of the Internet; the so-called medusa model for the Internet.
3
Graphical representation
Let us consider a graph G = (V, E) of |V | = n vertices and |E| = e edges; a k-core is
defined as follows [4]:
-A subgraph H = (C, E|C) induced by the set C ⊆ V is a k-core or a core of order k iff
∀v ∈ C : degree_H(v) ≥ k, and H is the maximum subgraph with this property.
A k-core of G can therefore be obtained by recursively removing all the vertices of degree
less than k, until all vertices in the remaining graph have at least degree k. Furthermore,
we will use the following definitions:
-A vertex i has shell index c if it belongs to the c-core but not to (c + 1)-core. We denote
by ci the shell index of vertex i.
-A shell Cc is composed by all the vertices whose shell index is c. The maximum value
c such that Cc is not empty is denoted cmax . The k-core is thus the union of all shells Cc
with c ≥ k.
-Each connected set of vertices having the same shell index c is a cluster Q^c. Each shell
Cc is thus composed by clusters Q^c_m, such that Cc = ∪_{1≤m≤q^c_max} Q^c_m, where q^c_max is the
number of clusters in Cc.
The visualization algorithm we propose places vertices in 2 dimensions, the position of
each vertex depending on its shell index and on the index of its neighbors. A color code
allows for the identification of shell indices, while the vertex's original degree is provided
by its size that depends logarithmically on the degree. For the sake of clarity, our algorithm
represents a small percentage of the edges, chosen uniformly at random. As mentioned, a
central role in our visualization method is played by the multi-component representation of k-cores. In the most general situation, indeed, the recursive removal of vertices having degree
less than a given k can break the original network into various connected components,
each of which might even be once again broken by the subsequent decomposition. Our
method takes into account this possibility, however we will first present the algorithm in
the simplified case, in which none of the k-cores is fragmented. Then, this algorithm will
be used as a subroutine for treating the general case (Table 1).
3.1
Drawing algorithm for k-cores with single connected component
k-core decomposition. The shell index of each vertex is computed and stored in a vector
C, along with the shells Cc and the maximum index cmax . Each shell is then decomposed
into clusters Qcm of connected vertices, and each vertex i is labeled by its shell index ci and
by a number qi representing the cluster it belongs to.
The two dimensional graphical layout. The visualization is obtained assigning to each
vertex i a couple of polar coordinates (ρi, αi): the radius ρi is a function of the shell index
of the vertex i and of its neighbors; the angle αi depends on the cluster number qi. In
this way, k-shells are displayed as layers with the form of circular shells, the innermost
one corresponding to the set of vertices with highest shell index. A vertex i belongs to the
(cmax − ci)-th layer from the center.
More precisely, ρi is computed according to the following formula:

ρi = (1 − ε)(cmax − ci) + (ε / |V_{cj≥ci}(i)|) Σ_{j ∈ V_{cj≥ci}(i)} (cmax − cj)    (1)

where V_{cj≥ci}(i) is the set of neighbors of i having shell index cj larger or equal to ci. The parameter ε controls the possibility of rings overlapping, and is one of the only three external
parameters required to tune the image's rendering.
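A direct transcription of Eq. (1) as reconstructed above (function and argument names are ours; ε = 0.18 is the value used in Section 4):

```python
def radius(i, shell, nbrs, cmax, eps=0.18):
    """Radial coordinate rho_i of Eq. (1): a convex mix of the vertex's own
    (cmax - c_i) and the mean (cmax - c_j) over neighbors j with c_j >= c_i."""
    hi = [j for j in nbrs[i] if shell[j] >= shell[i]]
    rho = (1 - eps) * (cmax - shell[i])
    if hi:  # nonempty for any vertex with neighbors in its own or a higher core
        rho += eps * sum(cmax - shell[j] for j in hi) / len(hi)
    return rho

shell = {0: 2, 1: 2, 2: 2, 3: 1}
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(radius(3, shell, nbrs, cmax=2))  # pendant vertex, pulled toward the rim
```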
Inside a given shell, the angle αi of a vertex i is computed as follows:

αi = 2π ( Σ_{1≤m<qi} |Qm| / |Cci| + N( |Qqi| / (2|Cci|), |Qqi| / |Cci| ) )    (2)

where Qqi and Cci are respectively the cluster qi and the ci-shell the vertex belongs to, and N
is a normal distribution of mean |Qqi|/(2|Cci|) and width |Qqi|/|Cci| (i.e. an angular width 2π|Qqi|/|Cci|). Since we are
interested in distinguishing different clusters in the same shell, the first term on the right
side of Eq. 2, referring to clusters with m < qi , allows to allocate a correct partition of
the angular sector to each cluster. The second term on the right side of Eq. 2, on the other
hand, specifies a random position for the vertex i in the sector assigned to the cluster Qqi .
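In code, the angular placement of Eq. (2) can be sketched as below; since the printed equation does not fully pin down the spread of the normal jitter, the scale used here (a quarter of the cluster's angular fraction) is our assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

def angle(q, cluster_sizes, shell_size):
    """Angular coordinate of Eq. (2). q is the (1-based) cluster index of the
    vertex within its shell; clusters occupy consecutive angular sectors
    proportional to their sizes, and the vertex gets a normally distributed
    position inside its own sector."""
    offset = sum(cluster_sizes[:q - 1]) / shell_size   # first term of Eq. (2)
    frac = cluster_sizes[q - 1] / shell_size
    jitter = rng.normal(loc=frac / 2, scale=frac / 4)  # spread is an assumption
    return 2 * np.pi * (offset + jitter)

print(angle(2, [5, 3, 2], 10))  # a vertex of the second cluster of its shell
```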
Colors and size of vertices. Colors depend on the shell index: vertices with shell index
1 are violet, and the maximum shell index vertices are red, following the rainbow color
scale. The diameter of each vertex corresponds to the logarithm of its degree, giving a
further information on vertex?s properties. The vertices with largest shell index are placed
uniformly in a disk of radius u, which is the unit length (u = 1 for this reduced algorithm).
3.2
Extended algorithm for networks with many k-cores components
The algorithm presented in the previous section can be used as the basic routine to define
an extended algorithm aimed at the visualization of networks for which some k-cores are
fragmented; i.e. made by more than one connected component. This issue is solved by
assigning to each connected component of a k-core a center and a size, which depends on
the relative sizes of the various components. Larger components are put closer to the global
center of the representation (which has Cartesian coordinates (0, 0)), and have larger sizes.
The algorithm begins with the center at the origin (0, 0). Whenever a connected component
of a k-core, whose center p had coordinates (Xp , Yp ), is broken into several components by
removing all vertices of degree k, i.e. by applying the next decomposition step, a new center
is computed for each new component. The center of the component h has coordinates
(Xh , Yh ), defined by
Xh = Xp + δ(cmax − ch) · up · ρh · cos(φh) ;  Yh = Yp + δ(cmax − ch) · up · ρh · sin(φh) ,    (3)

where δ scales the distance between components, cmax is the maximum shell index and ch
is the core number of component h (the components are numbered by h = 1, ..., hmax in
an arbitrary order), up is the unit length of its parent component, and ρh and φh are the radial
and angular coordinates of the new center with respect to the parent center (Xp, Yp). We
define ρh and φh as follows:

ρh = 1 − |Sh| / Σ_{1≤j≤hmax} |Sj| ;  φh = φini + (2π / Σ_{1≤j≤hmax} |Sj|) · Σ_{1≤j≤h} |Sj| ,    (4)
where Sh is the set of vertices in the component h, and Σj |Sj| is the sum of the sizes of all
components having the same parent component. In this way, larger components will be
closer to the original parent component's center p. The angle φh has two contributions.
The initial angle φini is chosen uniformly at random¹, while the angle sector is the sum of
component angles whose number is less than or equal to the actual component number h.
¹ Note that if φini is fixed, all the centers of the various components are aligned in the final
representation.
Algorithm 1
1  k := 1 and end := false
2  while not end do
3      (end, C) ← make_core k
4      (Q, T) ← compute_clusters k − 1, if k > 1
5      S ← compute_components k
6      (X, Y) ← compute_origin_coordinates_cmp k (Eqs. 3 to 4)
7      U ← compute_unit_size_cmp k (Eq. 5)
8      k := k + 1
9  for each node i do
10     if ci == cmax then
11         set ρi and αi according to a uniform distribution in the disk of radius u (u is the core representation unit size)
12     else
13         set ρi and αi according to Eqs. 1 and 2
14 (X, Y) ← compute_final_coordinates ρ α U X Y (Eq. 6)
Table 1: Algorithm for the representation of networks using k-cores decomposition
Finally, the unit length uh of a component h is computed as
uh = ( |Sh| / Σ_{1≤j≤hmax} |Sj| ) · up ,    (5)
where up is the unit length of its parent component. Larger unit length and size are therefore
attributed to larger components.
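A compact sketch of Eqs. (3)-(5), computing the center and unit length handed to each child component when a core fragments (function names and the handling of the random initial angle are ours):

```python
import numpy as np

rng = np.random.default_rng(5)

def child_centers(parent_xy, up, c_h, cmax, sizes, delta=1.3):
    """Centers (Eqs. 3-4) and unit lengths (Eq. 5) of the components a k-core
    splits into. sizes[h] = |S_h|; all children share the parent's center and
    unit length up; delta scales inter-component distances."""
    Xp, Yp = parent_xy
    total = sum(sizes)
    phi0 = rng.uniform(0, 2 * np.pi)  # phi_ini, drawn uniformly at random
    out, cum = [], 0
    for size in sizes:
        cum += size
        rho = 1 - size / total                   # larger components sit closer
        phi = phi0 + 2 * np.pi * cum / total
        r = delta * (cmax - c_h) * up * rho
        center = (Xp + r * np.cos(phi), Yp + r * np.sin(phi))
        out.append((center, up * size / total))  # (child center, child unit)
    return out

print(child_centers((0.0, 0.0), 1.0, c_h=2, cmax=5, sizes=[30, 10, 5]))
```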
For each vertex i, radial and angular coordinates are computed by equations 1 and 2 as
in the previous algorithm. These coordinates are then considered as relative to the center
(Xh , Yh ) of the component to which i belongs. The position of i is thus given by
xi = Xh + γ · uh · ρi · cos(αi) ;  yi = Yh + γ · uh · ρi · sin(αi)    (6)

where γ is a parameter controlling the component's diameter.
The global algorithm is formally presented in Table 1. The main loop is composed by the following functions. First, the function {(end, C) ← make_core k}
recursively removes all vertices of degree ≤ k − 1, obtaining the k-core, and stores
into C the shell index k − 1 of the removed vertices. The boolean variable end
is set to true if the k-core is empty, otherwise it is set to false. The function
{(Q, T) ← compute_clusters k − 1} operates the decomposition of the (k − 1)-shell into clusters, storing for each vertex the cluster label into the vector Q, and filling
table T, which is indexed by the shell index c and cluster label q: T(c, q) =
(Σ_{1≤m<q} |Qm|/|Cc|, |Qq|/|Cc|). The possible decomposition of the k-core into connected components is determined by function {S ← compute_components k}, that
also collects into a vector S the number of vertices contained in each component. At the
following step, functions {(X, Y) ← compute_origin_coordinates_cmp k} and
{U ← compute_unit_size_cmp k} get, respectively, the center and size of each component of the k-core, gathering them in vectors X, Y and U. Finally, the coordinates of
each vertex are computed and stored in the vectors X and Y.
Algorithm complexity. Batagelj and Zaversnik [18] present an algorithm to perform the
k-core decomposition, and show that its time complexity is O(e) (where e is the number
of edges) for a connected graph. For a general graph it is O(n + e), where n is the number
of nodes, which makes the algorithm very efficient for sparse graphs where e is of order n.
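For completeness, here is a sketch of that linear-time decomposition in the bucket style of [18] (our own rendering, not the authors' code): vertices sit in buckets keyed by their current degree and are peeled in increasing order, so each edge is handled a constant number of times:

```python
def core_decomposition(adj):
    """O(n + e) k-core decomposition: repeatedly remove a vertex of minimum
    remaining degree d, assign it shell index d, and decrement its neighbors."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    max_deg = max(degree.values(), default=0)
    buckets = [set() for _ in range(max_deg + 1)]
    for v, d in degree.items():
        buckets[d].add(v)
    index, removed = {}, set()
    for d in range(max_deg + 1):
        while buckets[d]:
            v = buckets[d].pop()
            index[v] = d
            removed.add(v)
            for u in adj[v]:
                if u not in removed and degree[u] > d:
                    buckets[degree[u]].discard(u)
                    degree[u] -= 1
                    buckets[degree[u]].add(u)
    return index

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(core_decomposition(adj))  # {3: 1, 0: 2, 1: 2, 2: 2}
```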
[Figure 1: two schematic layouts; legends show the shell index color scale (kmin to kmax) and the node degree size scale (3 to d_max).]
Figure 1: Structure of a typical layout in two important cases: on the left, all k-cores are
connected; on the right, some k-cores are composed by more than one connected component. The vertices are arranged in a series of concentric shells corresponding to the various
k-shells. The diameter of each shell depends on both the shell index and, in the case of multiple components (right), also on the relative fraction of vertices belonging to the different
components.
3.3
Basic features of the visualization?s layout
The main features of the layout?s structure obtained with the above algorithms are visible
in Fig.1 where, for the sake of simplicity, we do not show any edge.
The two-dimensional layout is composed of a series of concentric circular shells. Each
shell corresponds to a single shell index and all vertices in it are therefore drawn with the
same color. A color scale allows to distinguish different shell indices: the violet is used
for the minimum shell index kmin , then we use a graduated rainbow scale for higher and
higher shell indices up to the maximum value kmax that is colored in red. The diameter
of each k-shell depends on the shell index k, and is proportional to kmax ? k (In Fig.1,
the position of each shell is identified by a circle having the corresponding diameter). The
presence of a trivial order relation in the shell indices ensures that all shells are placed in
a concentric arrangement. On the other hand, when a k-core is fragmented in two or more
components, the diameters of the different components depend also on the relative number
of vertices belonging to each of them, i.e. the fraction between the number of vertices
belonging to that component and the total number of vertices in that core. This is a very
important information, providing a way to distinguish between multiple components at a
given shell index. Finally, the size of each node is proportional to the original degree of
that vertex; we use a logarithmic scale for the size of the drawn bullets.
4
Network fingerprinting
The k-core decomposition peels the network layer by layer, revealing the structure of the
different shells from the outmost one to the more internal ones. The algorithm provides
a direct way to distinguish the network?s different hierarchies and structural organization
by means of some simple quantities: the radial width of the shells, the presence and size
of clusters of vertices in the shells, the correlations between degree and shell index, the
distribution of the edges interconnecting vertices of different shells, etc.
1) Shells Width: The thickness of a shell depends on the shell index properties of the
neighbors of the vertices in the corresponding shell. For a given shell-diameter (black
circle in the median position of shells in Fig.2), each vertex can be placed more internal or
more external with respect to this reference. Nodes with more neighbors in higher shells are
closer to the center and vice versa: in Fig. 2, node y is more internal than node x because it
has three edges towards higher index nodes, while x has only one. The maximum thickness
of the shells is controlled by the parameter ε (Eq. 1).

Figure 2: Left: each shell has a certain radial width. This width depends on the correlation
properties of the vertices in the shell. In the second shell, we have pinpointed two nodes
x and y. Node y is more internal than x because a larger part of its neighbors belongs to
higher k-shells compared to x's neighbors. The figure on the right shows the clustering
properties of nodes in the same k-shell. In each k-shell, nodes that are directly connected
between them (in the original graph) are drawn close one to the other, as in a cluster. Some
of these sets of nodes are circled and highlighted in gray. Three examples of isolated nodes
are also indicated; these nodes have no connections with the others of the same shell.
2) Shell Clusters: The angular distribution of vertices in the shells is not completely homogeneous. Fig.2 shows that clusters of vertices can be observed. The idea is to group
together all nodes of the same shell that are directly linked in the original graph and to
represent them close one to another. Thus, a shell is divided in many angular sectors, each
containing a cluster of vertices. This feature allows to figure out at a glance if the shells
are composed of a single large connected component rather than divided into many small
clusters, or even if there are isolated vertices (i.e. disconnected from all other nodes in the
shell, not from the rest of the k-core!).
3) Degree-Shell index Correlation: Another property that can be studied from the obtained
layouts is the correlation between the degree of the nodes and the shell index. Both quantities are centrality measures and the nature of their correlations is a very important feature
characterizing a network?s topology. The nodes displayed in the most internal shells are
those forming the central core of the network; the presence of degree-index correlations
then corresponds to the fact that the central nodes are most likely high-degree hubs of the
network. This effect is observed in many real communication networks with a clear hierarchical structure, such as the Internet at the Autonomous System level or the world-wide
air-transportation network [5]. On the contrary, the presence of hubs in external shells is
typical of less hierarchically structured networks such as the World-Wide Web or the Internet Router Level. In this case, star-like configurations appear with high degree vertices
connected only to very low degree vertices. These vertices are rapidly pruned out in the
k-core decomposition even if they have a very high degree, leading to the presence of local
hubs in external shells, as in Fig. 3.
4) Edges: The visualization shows only a homogeneously randomly sampled fraction of
the edges, which can be tuned in order to get the better trade-off between the clarity of
visualization and the necessity of giving information on the way the nodes are mainly connected. Edge-reduction techniques can be implemented to improve the algorithm?s capacity
in representing edges; however, a homogeneous sampling does not alter the extraction of
topological information, ensuring a low computational cost. Finally, the two halves of each
edge are colored with the color of the corresponding extremities to emphasize the connection among vertices in different shells.

Figure 3: Correlations between shell index and degree. On the left, we report a graph
with strong correlation: the size of the nodes grows from the periphery to the center, in
correspondence with the shell index. In the right-hand case, the degree-index correlations
are blurred by large fluctuations, as stressed by the presence of hubs in the external shells.
5) Disconnected components: The fragmentation of any given k-core in two or more disconnected components is represented by the presence of a corresponding number of circular
shells with different centers (Fig. 1). The diameter of these circles is related with the number of nodes of each component and modulated by the γ parameter (Eq. 6). The distance
between components is controlled by the δ parameter (Eq. 3).
In summary, the proposed algorithm makes possible a direct, visual investigation of a series
of properties: hierarchical structures of networks, connectivity and clustering properties inside a given shell; relations and interconnectivity between different levels of the hierarchy,
correlations between degree and shell index, i.e. between different measures of centrality.
Numerous examples of the application of this tool to the visualization of real and computer
generated networks can be found on the web page of the publicly available tool [5]. For
example, the lack of hierarchy and structure of the Erdős-Rényi random graph is clearly
identified. Similarly the time correlations present in the Barabási-Albert network find a
clear fingerprint in our visualization layout. Here we display another interesting illustration of the use and capabilities of the proposed algorithm in the analysis of large sparse
graphs: the identification of the different hierarchical arrangement of the Internet network
when visualized at the Autonomous System (AS) and the Router (IR) levels². The AS
level is represented by collected routes of the Oregon route-views project [19], from May 2001.
For the IR level, we use the graph obtained by an exploration of Govindan and Tangmunarunkit [20] in 2000. These networks are composed respectively by about 11500 and
200000 nodes.
Figures 4 and 5 display the representations of these two different maps of Internet. At
the AS level, all shells are populated, and, for any given shell, the vertices are distributed
on a relatively large range of the radial coordinate, which means that their neighborhoods
are variously composed. The shell index and the degree are very correlated, with a clear
hierarchical structure, and links go principally from one shell to another. The hierarchical
structure exhibited by our analysis of the AS level is a striking property; for instance, one
might exploit it for showing that in the Internet high-degree vertices are naturally (as an
implicit result of the self-organizing growth) placed in the innermost structure. At higher
resolution, i.e. at the IR level, Internet?s properties are less structured: external layers, of
² The parameters are here set to the values ε = 0.18, δ = 1.3 and γ = 1.5.
lowest shell index, contain vertices with large degree. For instance, we find 20 vertices with
degree larger than 100 but index smaller than 6. The correlation between shell index and
degree is thus clearly of a very different nature in the maps of Internet obtained at different
granularities.
Figure 4: Graphical representation of the AS network. The three snapshots correspond to
the full network (top left), with the color scale of the shell index and the size scale for the
nodes? degrees, and to two magnifications showing respectively a more central part (top
right) and a radial slice of the layout (bottom).
5
Conclusions
Exploiting k-core decomposition, and the corresponding natural hierarchical structures, we
develop a visualization algorithm that yields a layout encoding a considerable amount of
the information needed for network fingerprinting in the simplicity of a 2D representation.
One can easily read basic features of the graph (degree, hierarchical structure, etc.) as well
as more entangled features, e.g. the relation between a vertex and the hierarchical position
of its neighbors. The present visualization strategy is a useful tool to discriminate between
networks with different topological properties and structural arrangement, and may be also
used for comparison of models with real data, providing a further interesting tool for model
Figure 5: Same as Fig. 4, for the graphical representation of the IR network.
validation. Finally, we also provide a publicly available tool for visualizing networks [5].
Acknowledgments: This work has been partially funded by the European Commission FET Open project COSIN IST-2001-33555 and contract 001907 (DELIS).
References
[1] R. Albert and A.-L. Barabási, "Statistical mechanics of complex networks," Rev. Mod. Phys. 74,
pp. 47-97, 2000.
[2] S. N. Dorogovtsev and J. F. F. Mendes, Evolution of networks: From biological nets to the
Internet and WWW, Oxford University Press, 2003.
[3] R. Pastor-Satorras and A. Vespignani, Evolution and structure of the Internet: A statistical
physics approach, Cambridge University Press, 2004.
[4] V. Batagelj and M. Zaversnik, "Generalized Cores," cs.DS/0202039, 2002.
[5] LArge NETwork VIsualization tool.
http://xavier.informatics.indiana.edu/lanet-vi/.
[6] http://i11www.ira.uka.de/cosin/tools/index.php.
[7] V. Batagelj, A. Mrvar, and M. Zaversnik, "Partitioning Approach to Visualization of Large
Networks," in Graph Drawing '99, Castle Stirin, Czech Republic, LNCS 1731, pp. 90-98, 1999.
[8] M. Baur, U. Brandes, M. Gaertler, and D. Wagner, "Drawing the AS Graph in 2.5 Dimensions,"
in 12th International Symposium on Graph Drawing, Springer-Verlag, pp. 43-48,
2004.
[9] U. Brandes and S. Cornelsen, "Visual Ranking of Link Structures," Journal of Graph Algorithms and Applications 7(2), pp. 181-201, 2003.
[10] B. Shneiderman, "Why not make interfaces better than 3D reality?," IEEE Computer Graphics
and Applications 23, pp. 12-15, November/December 2003.
[11] G. D. Bader and C. W. V. Hogue, "An automated method for finding molecular complexes in
large protein interaction networks," BMC Bioinformatics 4(2), 2003.
[12] M. Altaf-Ul-Amin, K. Nishikata, T. Koma, T. Miyasato, Y. Shinbo, M. Arifuzzaman, C. Wada,
M. Maeda, T. Oshima, H. Mori, and S. Kanaya, "Prediction of Protein Functions Based on
K-Cores of Protein-Protein Interaction Networks and Amino Acid Sequences," Genome Informatics 14, pp. 498-499, 2003.
[13] S. Wuchty and E. Almaas, "Peeling the yeast protein network," Proteomics 5(2), pp. 444-449, 2005.
[14] M. Gaertler and M. Patrignani, "Dynamic Analysis of the Autonomous System Graph," in IPS
2004, International Workshop on Inter-domain Performance and Simulation, Budapest, Hungary, pp. 13-24, 2004.
[15] I. Alvarez-Hamelin, L. Dall'Asta, A. Barrat, and A. Vespignani, "k-core decomposition: a tool
for the analysis of large scale internet graphs," cs.NI/0511007.
[16] S. Carmi, S. Havlin, S. Kirkpatrick, Y. Shavitt, and E. Shir,
http://www.cs.huji.ac.il/~kirk/Jellyfish_Dimes.ppt, 2005.
[17] S. Carmi, S. Havlin, S. Kirkpatrick, Y. Shavitt, and E. Shir, "Medusa - new model of internet
topology using k-shell decomposition," cond-mat/0601240.
[18] V. Batagelj and M. Zaversnik, "An O(m) Algorithm for Cores Decomposition of Networks,"
cs.DS/0310049, 2003.
[19] University of Oregon Route Views Project. http://www.routeviews.org/.
[20] R. Govindan and H. Tangmunarunkit, "Heuristics for Internet Map Discovery," in IEEE INFOCOM 2000, pp. 1371-1380, IEEE, (Tel Aviv, Israel), March 2000.
1,969 | 279 |
Note on Development of Modularity
in Simple Cortical Models
Alex Chernjavsky¹
Neuroscience Graduate Program
Section of Molecular Neurobiology
Howard Hughes Medical Institute
Yale University
John Moody²
Yale Computer Science
PO Box 2158 Yale Station
New Haven, CT 06520
Email: [email protected]
ABSTRACT
The existence of modularity in the organization of nervous systems
(e.g. cortical columns and olfactory glomeruli) is well known. We
show that localized activity patterns in a layer of cells, collective
excitations, can induce the formation of modular structures in the
anatomical connections via a Hebbian learning mechanism. The
networks are spatially homogeneous before learning, but the spontaneous emergence of localized collective excitations and subsequently modularity in the connection patterns breaks translational
symmetry. This spontaneous symmetry breaking phenomenon is
similar to those which drive pattern formation in reaction-diffusion
systems. We have identified requirements on the patterns of lateral
connections and on the gains of internal units which are essential
for the development of modularity. These essential requirements
will most likely remain operative when more complicated (and biologically realistic) models are considered.
¹ Present Address: Molecular and Cellular Physiology, Beckman Center, Stanford University,
Stanford, CA 94305.
² Please address correspondence to John Moody.
1 Modularity in Nervous Systems
Modular organization exists throughout the nervous system on many different spatial scales. On the very small scale, synapses appear to be clustered on dendrites.
On the very large scale, the brain as a whole is composed of many anatomically
and functionally distinct regions. At intermediate scales, the scales of networks and
maps, the brain exhibits columnar structures.
The purpose of this work is to suggest possible mechanisms for the development
of modular structures at the intermediate scales of networks and maps. The best
known modular structure at this scale is the column. Many modality-specific
variations of columnar organization are known, for example orientation selective
columns, ocular dominance columns, color sensitive blobs, somatosensory barrels,
and olfactory glomeruli. In addition to these anatomically well-established structures, other more speculative modular anatomical structures may exist. These
include the frontal eye fields of association cortex whose modular structure is inferred only from electrophysiology and the hypothetical existence of minicolumns
and possibly neuronal groups.
Although a complete biophysical picture of the development of modular structures
is still unavailable, it is well established that electrical activity is crucial for the
development of certain modular structures such as complex synaptic zones and ocular dominance columns (see Kalil 1989 and references therein). It is also generally
conjectured that a Hebb-like mechanism is operative in this development. These
observations form a basis for our operating hypothesis described below.
2 Operating Hypothesis and Modeling Approach
Our hypothesis in this work is that localized activity patterns in a layer of cells
induce the development of modular anatomical structure within the layer. We
further hypothesize that the emergence of localized activity patterns in a layer is
due to the properties of the intrinsic network dynamics and does not necessarily
depend upon the system receiving localized patterns of afferent activity.
Our work therefore has two parts. First, we show that localized patterns of activity on a preferred spatial scale, collective excitations, spontaneously emerge in
homogeneous networks with appropriate lateral connectivity and cellular response
properties when driven with arbitrary stimulus (see Moody 1990). Secondly, we
show that these collective excitations induce the formation of modular structures
in the connectivity patterns when coupled to a Hebbian learning mechanism.
The emergence of collective excitations at a preferred spatial scale in a homogeneous
network breaks translational symmetry and is an example of spontaneous symmetry
breaking. The Hebbian learning freezes the modular structure into the anatomy.
The time scale of collective excitations is short, while the Hebbian learning process
occurs over a longer time scale. The spontaneous symmetry breaking mechanism is
similar to that which drives pattern formation in reaction-diffusion systems (Turing
1952, Meinhardt 1982).

Figure 1: Network Models. A: Additive Model. B: Shunting Inhibition Model. Artwork after Pearson et al. (1987). [The diagrams show a layer of receptor units feeding internal units; model B splits the internal layer into excitatory and inhibitory units.]

Reaction-diffusion models have been applied to pattern formation in both biological and physical systems. One of the best known applications
is to the development of zebra stripes and leopard spots. Also, a network model
with dynamics exhibiting spontaneous symmetry breaking has been proposed by
Cowan (1982) to explain geometrical visual hallucination patterns.
Previous work by Pearson et al. (1987) demonstrated empirically that modularity
emerged in simulations of an idealized but rather complex model of somatosensory
cortex. The Pearson work was purely empirical and did not attempt to analyze
theoretically why the modules developed. It provided an impetus, however, for our
developing the theoretical results which we present here and in Moody (1990).
Our work is thus intended to provide a possible theoretical foundation for the development of modularity. We have limited our attention to simple models which
we can analyze mathematically in order to identify the essential requirements for
the formation of modules. To convince ourselves that both collective excitations
and the consequent development of modules are somewhat universal, we have considered several different network models. All models exhibit collective excitations.
We believe that more biologically realistic (and therefore more complicated) models
will very likely exhibit similar behaviors.
This paper is a substantially abbreviated version of Chernjavsky and Moody (1990).
3 Network Dynamics: Collective Excitations
The analysis of network dynamics presented in this section is adapted from Moody
(1990). Due to space limitations, we present here a detailed analysis of only the
simplest model which exhibits collective excitations.
All network models which we consider possess a single layer of receptor cells which
provide input to a single internal layer of laterally-connected cells. Two general
classes of models are considered (see figure 1): additive models and shunting inhibition models. The additive models contain a single population of internal cells
which make both lateral excitatory and inhibitory connections. Both connection
types are additive. The shunting inhibition models have two populations of cells
in the internal layer: excitatory cells which make additive synaptic axonal contact
with other cells and inhibitory cells which shunt the activities of excitatory cells.
Figure 2: A: Excitatory, Inhibitory, and Difference of Gaussian Lateral Connection
Patterns. B: Magnification Functions for the Linear Additive Model.
The additive models are further subdivided into models with linear internal units
and models with nonlinear (particularly sigmoidal) internal units. The shunting
inhibition models have linear excitatory units and sigmoidal inhibitory units. We
have considered two variants of the shunting models, those with and without lateral
excitatory connections.
For simplicity and tractability, we have limited the use of nonlinear response functions to at most one cell population in all models. More elaborate network models
could make greater use of nonlinearity, a greater variety of cell types (e.g. disinhibitory cells), and use more ornate connectivity patterns. However, such additional
structure can only add richness to the network behavior and is not likely to remove
the collective excitation phenomenon.
3.1 Dynamics for the Linear Additive Model
To elucidate the fundamental requirements for the spontaneous emergence of collective excitations, we now focus on the minimal model which exhibits the phenomenon, the linear additive model. This model is exactly solvable.
As we will see, collective excitations will emerge provided that the appropriate
lateral connectivity patterns are present and that the gains of the internal units are
sufficiently high. These basic requirements will carry over to the nonlinear additive
and shunting models.
The network relaxation equations for the linear additive model are:
$$\tau_d \frac{d}{dt} V_i = -V_i + \sum_j W_{ij}^{\mathrm{aff}} R_j + \sum_j W_{ij}^{\mathrm{lat}} E_j , \qquad (1)$$
where R_j and E_j are the activities (firing rates) of the jth receptor and internal cells respectively, V_i is the somatic potential of the ith internal cell, W_ij^aff and W_ij^lat are the afferent and lateral connections respectively, and τ_d is the dynamical relaxation time. The somatic potentials and firing rates of the internal units are linearly related by E_i = (V_i − θ)/ε, where θ is an offset or threshold and ε⁻¹ is the gain.
The steady state solutions of the network equations can be solved exactly by reformulating the problem in the continuum limit (i ↦ x):
$$\tau_d \frac{d}{dt} V(x) = -V(x) + A(x) + \int dy\, w^{\mathrm{lat}}(x - y)\, E(y) \qquad (2)$$
$$A(x) = \int dy\, w^{\mathrm{aff}}(x - y)\, R(y) \qquad (3)$$
The functions R(y) and E(y) are activation densities in the receptor and internal layers respectively. A(x) is the integrated input activation density to the internal layer. The functions w^aff(x − y) and w^lat(x − y) are interpreted as connection densities. Note that the network is spatially homogeneous since the connection densities depend only on the relative separation of post-synaptic and pre-synaptic cells (x − y). Examples of lateral connectivity patterns w^lat(x − y) are shown in figure 2A. These include local gaussian excitation, intermediate range gaussian inhibition, and a scaled difference of gaussians (DOG).
The exact stationary solution (d/dt)V(x) = 0 of the continuum dynamics of equation 2 can be computed by fourier transforming the equations to the spatial frequency domain. The solution thereby obtained (for θ = 0) is E(k) = M(k)A(k), where the variable k is the spatial frequency and M(k) is the network magnification function:
$$M(k) = \frac{1}{\epsilon - \tilde{w}^{\mathrm{lat}}(k)} . \qquad (4)$$
Positive magnification factors correspond to stable modes. When the magnification function is large and positive, the network magnifies afferent activity structure on specific spatial scales. This occurs when the inverse gain ε is sufficiently small and/or the fourier transform of the pattern of lateral connectivity w̃^lat(k) has a peak at a non-zero frequency.
Figure 2B shows magnification functions (plotted as a function of spatial scale 2π/k) corresponding to the lateral connectivity patterns shown in figure 2A for a network with ε = 1. Note that the gaussian excitatory and gaussian inhibitory connection patterns (which have total integrated weight ±0.25) magnify structure at large spatial scales by factors of 1.33 and 0.80 respectively. The scaled DOG connectivity pattern (which has total weight 0) gives rise to no large scale or small scale magnification, but rather magnifies structure on an intermediate spatial scale of 17 cells.
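The magnification function (4) is easy to evaluate numerically. The following sketch is ours, not the authors' code, and the kernel widths and weights are illustrative assumptions rather than the values behind Figure 2. It builds a difference-of-gaussians lateral kernel on a ring of cells, takes its discrete Fourier transform, and reads off the preferred spatial period at which M(k) peaks:

```python
import numpy as np

def dog_kernel(n_cells=64, sigma_e=1.4, sigma_i=2.1, w_e=0.25, w_i=0.25):
    """Difference-of-gaussians lateral kernel w_lat(x - y) on a ring of cells."""
    d = np.arange(n_cells)
    d = np.minimum(d, n_cells - d)          # circular distance from cell 0
    g = lambda s: np.exp(-d**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)
    return w_e * g(sigma_e) - w_i * g(sigma_i)

def magnification(w_lat, eps=1.0):
    """M(k) = 1 / (eps - w_lat_hat(k)); for these weights eps - w_hat > 0."""
    w_hat = np.real(np.fft.fft(w_lat))      # kernel is symmetric, so FFT is real
    return 1.0 / (eps - w_hat)

w = dog_kernel()
M = magnification(w)
k = np.fft.fftfreq(len(w))                  # frequencies in cycles per cell
pos = k > 0
k_star = k[pos][np.argmax(M[pos])]
print("preferred spatial period: %.1f cells" % (1.0 / k_star))
```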
We illustrate the response of linear networks with unit gain (ε = 1) and different lateral connectivity patterns in figure 3. The networks correspond to connectivities
Figure 3: Response of a Linear Network to Random Input. A: Response of neutral (dashed), lateral excitatory (upper solid), and lateral inhibitory (lower solid)
networks. B: Collective excitations (solid) as response to random input (dashed) in
network with DOG lateral connectivity.
and magnification functions shown in figure 2. Part A shows the response E(x) of neutral, gaussian excitatory, and gaussian inhibitory networks to net afferent input A(x) generated from a random 1/f² noise distribution. The neutral network (no lateral connections) yields the identity response to random input; the networks with the excitatory and inhibitory lateral connection patterns exhibit boosted and reduced response respectively. Part B shows the emergence of collective excitations (solid) for the scaled DOG lateral connectivity. The resulting collective excitations have a typical period of about 17 cells, corresponding to the peak in the magnification function shown in figure 2. Note that the positions of peaks and troughs of the collective excitations correspond approximately to local extrema in the random input (dashed).
It is interesting to note that although the individual components of the networks
are all linear, the overall response of the interacting system is nonlinear. It is
this collective nonlinearity of the system which enables the emergence of collective
excitations. Thus, although the connectivity giving rise to the response in figure 3B
is a scaled sum of the connectivities of the excitatory and inhibitory networks of
figure 3A, the responses themselves do not add.
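To make the mechanism concrete, here is a minimal discrete-time relaxation of equation (2) on a ring, with the same assumed DOG kernel as in the previous sketch; it is an illustration of the collective-excitation effect, not the authors' simulation. Driving the linear network (ε = 1, θ = 0) with random input yields a response whose dominant period sits at the peak of M(k):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
d = np.minimum(np.arange(n), n - np.arange(n))          # circular distance
g = lambda s: np.exp(-d**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)
w_hat = np.real(np.fft.fft(0.25 * g(1.4) - 0.25 * g(2.1)))  # DOG kernel FFT

def relax(A, eps=1.0, tau_d=10.0, steps=300, dt=1.0):
    """Iterate tau_d dV/dt = -V + A + w_lat * E, with E = V/eps (theta = 0)."""
    V = np.zeros_like(A)
    for _ in range(steps):
        E = V / eps
        lateral = np.real(np.fft.ifft(w_hat * np.fft.fft(E)))  # circular conv.
        V += (dt / tau_d) * (-V + A + lateral)
    return V / eps                                      # firing-rate profile E(x)

A = rng.uniform(0.0, 1.0, n)                            # random afferent input
E = relax(A)
k = np.argmax(np.abs(np.fft.fft(E - E.mean()))[1 : n // 2]) + 1
print("dominant period of the response: %.1f cells" % (n / k))
```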
3.2 Dynamics for Nonlinear Models
The nonlinear models, including the sigmoidal additive model and the shunting
models, exhibit the collective excitation phenomenon as well. These models can
not be solved exactly, however. See Moody (1990) for a detailed description.
Figure 4: Development of Modularity in the Nonlinear Shunting Inhibition Model.
Curves represent the average incoming connection value (either afferent connections
or lateral connections) for each excitatory internal unit. A: Time development of
Afferent Modularity. B: Time development of Lateral Modularity. A and B: 400
iterations (dotted line), 650 iterations (dashed line), 4100 iterations (solid line).
4 Hebbian Learning: Development of Modularity
The presence of collective excitations in the network dynamics enables the development of modular structures in both the afferent and lateral connection patterns via Hebbian learning. Due to space limitations, we present simulation results only for the nonlinear shunting model. We focus on this model since it has both afferent and lateral plastic connections and thus develops both afferent and lateral modular connectivities. The other models do not have plastic lateral connections and develop only afferent connectivity modules. A more detailed account of all simulations is given in Chernjavsky and Moody (1990).
In our networks, the plastic excitatory connection values are restricted to the range W ∈ [0, 1]. The homogeneous initial conditions for all connection values are W = 0.5. We have considered several variants of Hebbian learning. For the simulations we report here, however, we use only the simple Hebb rule with decay:
$$\tau_{\mathrm{Hebb}} \frac{d}{dt} W_{ij} = M_i N_j - \beta , \qquad (5)$$
where M_i and N_j are the post- and pre-synaptic activities respectively and β is the decay constant chosen to be approximately equal to the expected value of M_i N_j averaged over the whole network. This choice of β makes the Hebb rule similar to the covariance type rule of Sejnowski (1977). τ_Hebb is the timescale for learning.
The simulation results illustrated in figure 4 are of one dimensional networks with
64 units per layer. In these simulations, the units and connections illustrated are
intended to represent a continuum. The connection densities for afferent and lateral excitatory connections were chosen to be gaussian with a maximum fan-out of 9 lattice units. The inhibitory connection density had a maximum fan-in of 19 lattice units and had a symmetric bimodal shape. The sigmas of the excitatory and inhibitory fan-ins were respectively 1.4 and 2.1 (short-range excitation and longer range inhibition). The linear excitatory units had ε = 1 and θ = 0, while the sigmoidal inhibitory units had ε = 0.125 and θ = 0.5.

The input activations were uniform random values in the range [0,1]. The input activations were spatially and temporally uncorrelated. Each input pattern was presented for only one dynamical relaxation time of the network (10 timesteps). The following adaptation rate parameters were used: dynamical relaxation rate τ_d⁻¹ = 0.1, learning rate τ_Hebb⁻¹ = 0.01, weight decay constant β = 0.125.
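A compressed sketch of such a training loop is given below. It follows the stated protocol (one relaxation time of 10 steps per pattern, then the Hebb rule (5) with clipping to [0, 1]) but simplifies the architecture, assumed here to be a single plastic afferent pathway with a fixed DOG lateral kernel; it conveys the structure of the simulation and is not meant to reproduce Figure 4:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
ring = np.minimum(np.arange(n), n - np.arange(n))
g = lambda s: np.exp(-ring**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)
w_hat = np.real(np.fft.fft(0.25 * g(1.4) - 0.25 * g(2.1)))   # fixed DOG kernel

idx = np.arange(n)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, n - dist)
mask = dist <= 4                       # afferent fan-out of 9 lattice units
W = np.full((n, n), 0.5) * mask        # plastic afferent weights in [0, 1]

inv_tau_d, inv_tau_hebb, beta = 0.1, 0.01, 0.125
for _ in range(2000):
    R = rng.uniform(0.0, 1.0, n)       # uncorrelated input pattern
    A = W @ R                          # integrated afferent drive
    V = np.zeros(n)
    for _ in range(10):                # one relaxation time = 10 timesteps
        E = V                          # linear units: eps = 1, theta = 0
        lat = np.real(np.fft.ifft(w_hat * np.fft.fft(E)))
        V += inv_tau_d * (-V + A + lat)
    W += inv_tau_hebb * (np.outer(E, R) - beta) * mask   # Hebb rule (5)
    np.clip(W, 0.0, 1.0, out=W)

avg_in = W.sum(axis=1) / mask.sum(axis=1)   # average incoming weight per unit
print("modulation depth of afferent weights:",
      round(float(avg_in.max() - avg_in.min()), 3))
```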
Acknowledgements
The authors wish to thank George Carman, Martha Constantine-Paton, Kamil Grajski,
Daniel Kammen, John Pearson, and Gordon Shepherd for helpful comments. A.C. thanks
Stephen J Smith for the freedom to pursue projects outside the laboratory. J.M. was supported by ONR Grant N00014-89-J-1228 and AFOSR Grant 89-0478. A.C. was supported
by the Howard Hughes Medical Institute and by the Yale Neuroscience Program.
References
Alex Chernjavsky and John Moody. (1990) Spontaneous development of modularity in
simple cortical models. Submitted to Neural Computation.
Jack D. Cowan. (1982) Spontaneous symmetry breaking in large scale nervous activity.
Intl. J. Quantum Chemistry, 22:1059.
Ronald E. Kalil. (1989) Synapse formation in the developing brain. Scientific American
December.
H. Meinhardt. (1982) Models of Biological Pattern Formation. Academic Press, New York.
John Moody. (1990) Dynamics of lateral interaction networks. Technical report, Yale
University. (In Preparation.)
Vernon B. Mountcastle. (1957) Modality and topographic properties of single neurons of
cat's somatic sensory cortex. Journal of Neurophysiology, 20:408.
John C. Pearson, Leif H. Finkel, and Gerald M. Edelman. (1987) Plasticity in the organization of adult cerebral cortical maps: A computer simulation based on neuronal group
selection. Journal of Neuroscience, 7:4209.
Terry Sejnowski. (1977) Strong covariance with nonlinearly interacting neurons. J. Math.
Bioi. 4:303.
Alan Turing. (1952) The chemical basis of morphogenesis. Phil. Trans. R. Soc., B237:37.
1,970 | 2,790 | Convergence and Consistency of
Regularized Boosting Algorithms with
Stationary β-Mixing Observations
Aurélie C. Lozano
Department of Electrical Engineering
Princeton University
Princeton, NJ 08544
[email protected]
Sanjeev R. Kulkarni
Department of Electrical Engineering
Princeton University
Princeton, NJ 08544
[email protected]
Robert E. Schapire
Department of Computer Science
Princeton University
Princeton, NJ 08544
[email protected]
Abstract
We study the statistical convergence and consistency of regularized
Boosting methods, where the samples are not independent and identically distributed (i.i.d.) but come from empirical processes of stationary
β-mixing sequences. Utilizing a technique that constructs a sequence of
independent blocks close in distribution to the original samples, we prove
the consistency of the composite classifiers resulting from a regularization achieved by restricting the 1-norm of the base classifiers' weights.
When compared to the i.i.d. case, the nature of sampling manifests in the
consistency result only through generalization of the original condition
on the growth of the regularization parameter.
1 Introduction
A significant development in machine learning for classification has been the emergence
of boosting algorithms [1]. Simply put, a boosting algorithm is an iterative procedure that
combines weak prediction rules to produce a composite classifier, the idea being that one
can obtain very precise prediction rules by combining rough ones. It was shown in [2] that
AdaBoost, the most popular Boosting algorithm, can be seen as stage-wise fitting of additive models under the exponential loss function and it effectively minimizes an empirical
loss function that differs from the probability of incorrect prediction. From this perspective, boosting can be seen as performing a greedy stage-wise minimization of various loss
functions empirically. The question of whether boosting achieves Bayes-consistency then
arises, since minimizing an empirical loss function does not necessarily imply minimizing
the generalization error. When run a very long time, the AdaBoost algorithm, though resistant to overfitting, is not immune to it [2, 3]. There also exist cases where running Adaboost
forever leads to a prediction error larger than the Bayes error in the limit of infinite sample
size. Consequently, one approach for the study of consistency is to modify the original Adaboost algorithm by imposing some constraints on the weights of the composite classifier
to avoid overfitting. In this regularized version of Adaboost, the 1-norm of the weights of
the base classifiers is restricted to a fixed value. The minimization of the loss function is
performed over the restricted class [4, 5].
In this paper, we examine the convergence and consistency of regularized boosting algorithms with samples that are no longer i.i.d. but come from empirical processes of stationary weakly dependent sequences. A practical motivation for our study of non i.i.d. sampling is that in many learning applications observations are intrinsically temporal and hence
often weakly dependent. Ignoring this dependency could seriously undermine the performance of the learning process (for instance, information related to the time-dependent ordering of samples would be lost). Recognition of this issue has led to several studies of non
i.i.d. sampling [6, 7, 8, 9, 10, 11, 12].
To cope with weak dependence we apply mixing theory which, through its definition of
mixing coefficients, offers a powerful approach to extend results for the traditional i.i.d.
observations to the case of weakly dependent or mixing sequences. We consider the β-mixing coefficients, whose mathematical definition is deferred to Sec. 2.1. Intuitively, they provide a "measure" of how fast the dependence between the observations diminishes as
the distance between them increases. If certain conditions on the mixing coefficients are
satisfied to reflect a sufficiently fast decline in the dependence between observations as
their distance grows, counterparts to results for i.i.d. random processes can be established.
A comprehensive review of mixing theory results is provided in [13].
Our principal finding is that consistency of regularized Boosting methods can be established
in the case of non-i.i.d. samples coming from empirical processes of stationary β-mixing
sequences. Among the conditions that guarantee consistency, the mixing nature of sampling appears only through a generalization of the one on the growth of the regularization
parameter originally stated for the i.i.d. case [4].
2 Background and Setup
2.1 Mixing Sequences
Let W = (W_i)_{i≥1} be a strictly stationary sequence of random variables, each having the same distribution P on D ⊆ R^d. Let σ_1^l = σ(W_1, W_2, ..., W_l) be the σ-field generated by W_1, ..., W_l. Similarly, let σ_{l+k}^∞ = σ(W_{l+k}, W_{l+k+1}, ...). The following mixing coefficients¹ characterize how close to independent a sequence W is.
1
Definition 1. For any sequence W , ?
the ?-mixing
?
? coefficient is defined
?by
?
,
?W (n) = supk E sup |P A|?1k ? P (A) | : A ? ?k+n
k
where the expectation is taken w.r.t. ?1 .
Hence β_W(n) quantifies the degree of dependence between "future" observations and "past" ones separated by a distance of at least n. In this study, we will assume that the sequences
¹ To gain insight into the notion of β-mixing, it is useful to think of the σ-field generated by a random variable X as the "body of information" carried by X. This leads to the following interpretation of β-mixing. Suppose that the index i in W_i is the time index. Let A be an event happening in the future within the period of time between t = k + n and t = ∞. |P(A | σ_1^k) − P(A)| is the absolute difference between the probability that event A occurs, given the knowledge of the information generated by the past up to t = k, and the probability of event A occurring without this knowledge. Then, the greater the dependence between σ_1^k (the information generated by (W_1, ..., W_k)) and σ_{k+n}^∞ (the information generated by (W_{k+n}, ..., W_∞)), the larger the coefficient β_W(n).
we consider are algebraically β-mixing. This property implies that the dependence between observations decreases fast enough as the distance between them increases.

Definition 2. A sequence W is called β-mixing if lim_{n→∞} β_W(n) = 0. Further, it is algebraically β-mixing if there is a positive constant r_β such that β_W(n) = O(n^{−r_β}).
The choice of β-mixing appears appropriate given previous results that showed "uniform convergence of empirical means uniformly in probability" and "probably approximately correct" properties to be preserved for β-mixing inputs [11]. Some examples of β-mixing sequences that fit naturally in a learning scenario are certain Markov processes and Hidden Markov Models [11]. In practice, if the mixing properties are unknown, they need to be estimated. Although it is difficult to find them in general, there exist simple methods to determine the mixing rates for various classes of random processes (e.g. Gaussian, Markov, ARMA, ARCH, GARCH). Hence the assumption of a known mixing rate is reasonable and has been adopted by many studies [6, 7, 8, 9, 10, 12].
2.2 Classification with Stationary β-Mixing Training Data
In the standard binary classification problem, the training data consist of a set S_n = {(X_1, Y_1), ..., (X_n, Y_n)}, where X_k belongs to some measurable space X, and Y_k is in {−1, 1}. Using S_n, a classifier h_n : X → {−1, 1} is built to predict the label Y of an unlabeled observation X. Traditionally, the samples are assumed to be i.i.d., and to our knowledge, this assumption is made by all the studies on boosting consistency. In this paper, we suppose that the sampling is no longer i.i.d. but corresponds to an empirical process of stationary β-mixing sequences. More precisely, let D = X × Y, where Y = {−1, +1}. Let W_i = (X_i, Y_i). We suppose that W = (W_i)_{i≥1} is a strictly stationary sequence of random variables, each having the same distribution P on D and that W is β-mixing (see Definition 2). This setup is in line with [7]. We assume that the unlabeled observation is such that (X, Y) is independent of S_n but with the same marginal.
3 Statistical Convergence and Consistency of Regularized Boosting
for Stationary β-Mixing Sequences
3.1 Regularized Boosting
We adopt the framework of [4] which we now recall. Let H denote the class of base classifiers h : X → {−1, 1}, which usually consists of simple rules (for instance decision stumps). This class is required to have finite VC-dimension. Call F the class of functions f : X → [−1, 1] obtained as convex combinations of the classifiers in H:
$$\mathcal{F} = \left\{ f(X) = \sum_{j=1}^{t} w_j h_j(X) \;:\; t \in \mathbb{N},\; w_1, \ldots, w_t \ge 0,\; \sum_{j=1}^{t} w_j = 1,\; h_1, \ldots, h_t \in \mathcal{H} \right\}. \qquad (1)$$
Each f_n ∈ F defines a classifier h_{f_n} = sign(f_n) and for simplicity the generalization error L(h_{f_n}) is denoted by L(f_n). Then the training error is denoted by L_n(f_n) = (1/n) Σ_{i=1}^n I_{[h_{f_n}(X_i) ≠ Y_i]}. Define Z(f) = −f(X)Y and Z_i(f) = −f(X_i)Y_i. Instead of minimizing the indicator of misclassification (I_{[−f(X)Y > 0]}), boosting methods are shown to effectively minimize a smooth convex cost function of Z(f). For instance, AdaBoost is based on the exponential function. Consider a positive, differentiable, strictly increasing, and strictly convex function φ : R → R⁺ and assume that φ(0) = 1 and that lim_{x→−∞} φ(x) = 0. The corresponding cost function and empirical cost function are respectively C(f) = Eφ(Z(f)) and C_n(f) = (1/n) Σ_{i=1}^n φ(Z_i(f)). Note that L(f) ≤ C(f), since I_{[x>0]} ≤ φ(x).
The iterative aspect of boosting methods is ignored to consider only their performing an (approximate) minimization of the empirical cost function or, as we shall see, a series of cost functions. To avoid overfitting, the following regularization procedure is developed for the choice of the cost functions. Define φ^λ such that ∀λ > 0, φ^λ(x) = φ(λx). The corresponding empirical and expected cost functions become C_n^λ(f) = (1/n) Σ_{i=1}^n φ^λ(Z_i(f)) and C^λ(f) = Eφ^λ(Z(f)). The minimization of a series of cost functions C^λ over the convex hull of H is then analyzed.
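For intuition, the sketch below (ours, not the paper's algorithm) performs an approximate greedy minimization of the empirical cost C_n^λ over the convex hull of decision stumps on assumed toy data: at each round it tries every stump and a small grid of convex step sizes, keeping the combination inside conv(H ∪ {0}):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=n)
Y = np.sign(np.sin(2 * X) + 0.3 * rng.normal(size=n))   # assumed toy labels

lam = 2.0                                               # smoothing parameter

def cost(f_vals):
    """Empirical cost C_n^lam(f) = (1/n) sum phi(-lam f(X_i) Y_i), phi = exp."""
    return np.exp(-lam * f_vals * Y).mean()

# base class H: decision stumps h(x) = s * sign(x - theta)
stumps = [(t, s) for t in np.quantile(X, np.linspace(0.05, 0.95, 19))
                 for s in (1.0, -1.0)]

f_vals = np.zeros(n)              # start at f = 0 (a sub-convex combination)
for _ in range(50):
    best_c, best_f = cost(f_vals), f_vals
    for theta, s in stumps:
        h = s * np.sign(X - theta)
        for a in (0.05, 0.1, 0.3, 0.5):
            cand = (1 - a) * f_vals + a * h             # stays in conv(H u {0})
            c = cost(cand)
            if c < best_c:
                best_c, best_f = c, cand
    f_vals = best_f

print("C_n^lam = %.3f, training error = %.3f"
      % (cost(f_vals), float((np.sign(f_vals) != Y).mean())))
```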
3.2 Statistical Convergence
The nature of the sampling intervenes in the following two lemmas that relate the empirical cost C_n^λ(f) and the true cost C^λ(f).
Lemma 1. Suppose that for any n, the training data (X_1, Y_1), ..., (X_n, Y_n) come from a stationary algebraically β-mixing sequence with β-mixing coefficients β(m) satisfying β(m) = O(m^{−r_β}), m ∈ N and r_β a positive constant. Then for any λ > 0 and b ∈ [0, 1),
$$E \sup_{f \in \mathcal{F}} |C^{\lambda}(f) - C_n^{\lambda}(f)| \;\le\; 4\lambda\varphi'(\lambda)\,\frac{c_1}{n^{(1-b)/2}} \;+\; 2\varphi(\lambda)\left(\frac{1}{n^{b(1+r_\beta)-1}} + \frac{1}{n^{1-b}}\right). \qquad (2)$$
Lemma 2. Let the training data be as in Lemma 1. For any b ∈ [0, 1) and γ ∈ (0, 1 − b), let ε_n = 3(2c_1 + n^{γ/2}) λφ'(λ)/n^{(1−b)/2}. Then for any λ > 0,
$$P\left\{ \sup_{f \in \mathcal{F}} |C^{\lambda}(f) - C_n^{\lambda}(f)| > \epsilon_n \right\} \;\le\; \exp(-4 c_2 n^{\gamma}) + O\!\left(n^{1-b(r_\beta+1)}\right). \qquad (3)$$
The constants c1 and c2 in the above lemmas are given in the proofs of Lemma 1 (Section 4.2) and Lemma 2 (Section 4.3) respectively.
3.3 Consistency Result
The following summarizes the assumptions that are made to prove consistency.
Assumption 1.
I- Properties of the sample sequence: The samples (X_1, Y_1), ..., (X_n, Y_n) are assumed to come from a stationary algebraically β-mixing sequence with β-mixing coefficients β_{X,Y}(n) = O(n^{−r_β}), r_β being a positive constant.
II- Properties of the cost function φ: φ is assumed to be a differentiable, strictly convex, strictly increasing cost function such that φ(0) = 1 and lim_{x→−∞} φ(x) = 0.
III- Properties of the base hypothesis space: H has finite VC dimension. The distribution of (X, Y) and the class H are such that lim_{λ→∞} inf_{f∈λF} C(f) = C*, where λF = {λf : f ∈ F} and C* = inf C(f) over all measurable functions f : X → R.
IV- Properties of the smoothing parameter: We assume that λ_1, λ_2, ... is a sequence of positive numbers satisfying λ_n → ∞ as n → ∞, and that there exists a constant c, with 1/(1 + r_β) < c ≤ 1, such that λ_n φ'(λ_n)/n^{(1−c)/2} → 0 as n → ∞.
Call f̂_n^λ the function in F which approximately minimizes C_n^λ(f), i.e. f̂_n^λ is such that C_n^λ(f̂_n^λ) ≤ inf_{f∈F} C_n^λ(f) + ξ_n = inf_{f∈F} (1/n) Σ_{i=1}^n φ^λ(Z_i(f)) + ξ_n, with ξ_n → 0 as n → ∞. The main result is the following.
Theorem 1 (Consistency of regularized boosting methods for stationary β-mixing sequences). Let f_n = f̂_n^{λ_n} ∈ F, where f̂_n^{λ_n} (approximately) minimizes C_n^{λ_n}(f). Under Assumption 1, lim_{n→∞} L(h_{f_n} = sign(f_n)) = L* almost surely, and h_{f_n} is strongly Bayes-risk consistent.
Cost functions satisfying Assumption 1.II include the exponential function and the logit function log₂(1 + eˣ). Regarding Assumption 1.III, the reader is referred to [4] (remark on the denseness assumption). In Assumption 1.IV, notice that the nature of sampling leads to a generalization of the condition on the growth of λ_nφ'(λ_n) already present in the i.i.d. setting [4]. More precisely, the nature of sampling manifests through the parameter c, which is limited by r_β. The assumption that r_β is known is quite strict but cannot be avoided (for instance this assumption is widely made in the field of time series analysis). On a positive note, if unknown, r_β can be determined for various classes of processes as mentioned in Section 2.1.
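As a worked instance of Assumption 1.IV (our example, not taken from the paper): for the exponential cost φ(x) = e^x one has λφ'(λ) = λe^λ, so the choice λ_n = κ ln n gives
$$\frac{\lambda_n \varphi'(\lambda_n)}{n^{(1-c)/2}} = \frac{\kappa (\ln n)\, n^{\kappa}}{n^{(1-c)/2}} \longrightarrow 0 \qquad \text{whenever } \kappa < \frac{1-c}{2},$$
i.e. a logarithmically growing smoothing parameter is admissible. Slower mixing (smaller r_β) forces c closer to 1, and hence a smaller admissible growth constant κ.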
4 Proofs
4.1 Preparation to the Proofs: the Blocking Technique
The key issue resides in upper bounding
$$\sup_{f \in \mathcal{F}} |C_n^{\lambda}(f) - C^{\lambda}(f)| = \sup_{f \in \mathcal{F}} \left| \frac{1}{n} \sum_{i=1}^{n} \varphi(-\lambda f(X_i) Y_i) - E \varphi(-\lambda f(X_1) Y_1) \right|, \qquad (4)$$
where F is given by (1). Let W = (X, Y), W_i = (X_i, Y_i). Define the function g_λ by g_λ(W) = g_λ(X, Y) = φ(−λf(X)Y) and the class G_λ by G_λ = {g_λ : g_λ(X, Y) = φ(−λf(X)Y), f ∈ F}. Then (4) can be rewritten as
$$\sup_{f \in \mathcal{F}} |C_n^{\lambda}(f) - C^{\lambda}(f)| = \sup_{g_\lambda \in \mathcal{G}_\lambda} \left| \frac{1}{n} \sum_{i=1}^{n} g_\lambda(W_i) - E g_\lambda(W_1) \right|.$$
Note that the class G_λ is uniformly bounded by φ(λ). Besides, if H is a class of measurable functions, then G_λ is also a class of measurable functions, by measurability of F.
As the W_i's are not i.i.d., we propose to use the blocking technique developed in [12, 14] to construct i.i.d. blocks of observations which are close in distribution to the original sequence W_1, ..., W_n. This enables us to work on the sequence of independent blocks instead of the original sequence. We use the same notation as in [12]. The protocol is the following. Let (b_n, μ_n) be a pair of integers, such that
$$(n - 2 b_n) \le 2 b_n \mu_n \le n. \qquad (5)$$
Divide the segment W_1 = (X_1, Y_1), ..., W_n = (X_n, Y_n) of the mixing sequence into 2μ_n blocks of size b_n, followed by a remaining block (of size at most 2b_n). Consider the odd blocks only. If their size b_n is large enough, the dependence between them is weak, since two odd blocks are separated by an even block of the same size b_n. Therefore, the odd blocks can be approximated by a sequence of independent blocks with the same within-block structure. The same holds if we consider the even blocks. Let (ξ_1, ..., ξ_{b_n}), (ξ_{b_n+1}, ..., ξ_{2b_n}), ..., (ξ_{(2μ_n−1)b_n+1}, ..., ξ_{2μ_n b_n}) be independent blocks such that (ξ_{jb_n+1}, ..., ξ_{(j+1)b_n}) =_D (W_{jb_n+1}, ..., W_{(j+1)b_n}), for j = 0, ..., 2μ_n − 1.
For j = 1, ..., 2μ_n, and any g ∈ G_λ, define
$$Z_{j,g} := \sum_{i=(j-1)b_n+1}^{j b_n} g(\xi_i) - b_n E g(\xi_1), \qquad \tilde{Z}_{j,g} := \sum_{i=(j-1)b_n+1}^{j b_n} g(W_i) - b_n E g(W_1).$$
Let O_{μ_n} = {1, 3, ..., 2μ_n − 1} and E_{μ_n} = {2, 4, ..., 2μ_n}. Define Z_{i,j}(f) as Z_{i,j}(f) := −f(ξ_{(2j−2)b_n+i,1}) ξ_{(2j−2)b_n+i,2}, where ξ_{k,1} and ξ_{k,2} are respectively the 1st and 2nd coordinates of the vector ξ_k. These correspond to the Z_k(f) = −f(X_k)Y_k for k in the odd blocks 1, ..., b_n, 2b_n + 1, ..., 3b_n, ....
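The block construction is easy to state in code. The sketch below (ours) chooses b_n = ⌊n^b⌋, derives μ_n so that (5) holds, and returns the index sets of the odd and even blocks together with the remainder; it manipulates indices only, so it applies to any sample sequence:

```python
import numpy as np

def blocks(n, b=0.5):
    """Split {0,...,n-1} into 2*mu blocks of length b_n plus a remainder."""
    b_n = max(1, int(n ** b))
    mu = n // (2 * b_n)               # satisfies (n - 2 b_n) <= 2 b_n mu <= n
    odd, even = [], []
    for j in range(2 * mu):
        blk = np.arange(j * b_n, (j + 1) * b_n)
        (odd if j % 2 == 0 else even).append(blk)
    remainder = np.arange(2 * mu * b_n, n)   # at most 2 b_n - 1 leftover points
    return odd, even, remainder

odd, even, rem = blocks(1000, b=0.5)
print(len(odd), "odd blocks of length", len(odd[0]), "| remainder:", len(rem))
```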
4.2 Proof sketch of Lemma 1
A. Working with Independent Blocks. We show that
$$E \sup_{g \in \mathcal{G}_\lambda} \left| \frac{1}{n} \sum_{i=1}^{n} g(W_i) - E g(W_1) \right| \;\le\; 2 E \sup_{g \in \mathcal{G}_\lambda} \left| \frac{1}{n} \sum_{j \in O_{\mu_n}} Z_{j,g} \right| + \varphi(\lambda)\, \mu_n \beta_W(b_n) + \frac{2 b_n \varphi(\lambda)}{n}. \qquad (6)$$
Proof. Without loss of generality, assume that Eg(W_1) = Eg(ξ_1) = 0. Then E sup_g |(1/n) Σ_{i=1}^n g(W_i)| = E sup_g |(1/n)(Σ_{j∈O_{μ_n}} Z̃_{j,g} + Σ_{j∈E_{μ_n}} Z̃_{j,g} + R)|, where R is the remainder term consisting of a sum of at most 2b_n terms. Noting that ∀g ∈ G_λ, |g| ≤ φ(λ), it follows that E sup_g |(1/n) Σ_{i=1}^n g(W_i)| ≤ E(sup_g |(1/n) Σ_{O_{μ_n}} Z̃_{j,g}|) + E(sup_g |(1/n) Σ_{E_{μ_n}} Z̃_{j,g}|) + φ(λ)(2b_n)/n. We use the following intermediary lemma.

Lemma 3 (adapted from [15], Lemma 4.1). Call Q the distribution of (W_1, ..., W_{b_n}, W_{2b_n+1}, ..., W_{3b_n}, ...) and Q̃ the distribution of (ξ_1, ..., ξ_{b_n}, ξ_{2b_n+1}, ..., ξ_{3b_n}, ...). For any measurable function h on R^{b_n μ_n} with bound H, |Qh(W_1, ...) − Q̃h(ξ_1, ...)| ≤ H (μ_n − 1) β_W(b_n). The same result holds for (W_{b_n+1}, ..., W_{2b_n}, W_{3b_n+1}, ..., W_{4b_n}, ...).

Using this with h(W_1, ...) = sup_g |(1/n) Σ_{O_{μ_n}} Z̃_{j,g}| and h(W_{b_n+1}, ...) = sup_g |(1/n) Σ_{E_{μ_n}} Z̃_{j,g}| respectively, and noting that H = φ(λ)/2, we have E sup_g |(1/n) Σ_{i=1}^n g(W_i)| ≤ E sup_g |(1/n) Σ_{O_{μ_n}} Z_{j,g}| + (φ(λ)/2) μ_n β_W(b_n) + E sup_g |(1/n) Σ_{E_{μ_n}} Z_{j,g}| + (φ(λ)/2) μ_n β_W(b_n) + φ(λ)(2b_n)/n. As the Z_{j,g}'s from odd and even blocks have the same distribution, we obtain (6). □
B. Symmetrization. The odd-block Z_{j,g}'s being independent, we can use the standard symmetrization techniques. Let the Z'_{j,g}'s be i.i.d. copies of the Z_{j,g}'s. Let the Z'_{i,j}(f)'s be the corresponding copies of the Z_{i,j}(f). Let (σ_i) be a Rademacher sequence, i.e. a sequence of independent random variables taking the values ±1 with probability 1/2. Then by [16], Lemma 6.3 (proof is omitted due to space constraints), we have
$$E \sup_{g} \left| \frac{1}{n} \sum_{j \in O_{\mu_n}} Z_{j,g} \right| \;\le\; E \sup_{g} \left| \frac{1}{n} \sum_{j \in O_{\mu_n}} \sigma_j \left( Z_{j,g} - Z'_{j,g} \right) \right|. \qquad (7)$$
C. Contraction Principle. We now show that
$$E \sup_{g \in \mathcal{G}_\lambda} \left| \frac{1}{n} \sum_{j \in O_{\mu_n}} Z_{j,g} \right| \;\le\; 2\, b_n\, \lambda \varphi'(\lambda)\; E \sup_{f \in \mathcal{F}} \left| \frac{1}{n} \sum_{j=1}^{\mu_n} \sigma_j Z_{1,j}(f) \right|. \qquad (8)$$
Proof. As Z_{j,g} = Σ_{i=1}^{b_n} φ^λ(Z_{i,j}(f)) − b_n Eφ^λ(Z_{1,j}(f)), and the Z_{i,j}(f)'s and Z'_{i,j}(f)'s are i.i.d., with (7)
$$E \sup_{g} \Big| \frac{1}{n} \sum_{j \in O_{\mu_n}} Z_{j,g} \Big| \le E \sup_{f} \Big| \frac{1}{n} \sum_{j=1}^{\mu_n} \sigma_j \sum_{i=1}^{b_n} \big( \varphi^\lambda(Z_{i,j}(f)) - \varphi^\lambda(Z'_{i,j}(f)) \big) \Big| \le 2 b_n\, E \sup_{f} \Big| \frac{1}{n} \sum_{j=1}^{\mu_n} \sigma_j \big( \varphi^\lambda(Z_{1,j}(f)) - 1 \big) \Big|.$$
By applying the "Comparison Theorem", Theorem 7 in [17], to the contraction ψ(x) = (1/(λφ'(λ)))(φ^λ(x) − 1), we obtain (8). □
D. Maximal Inequality. We show that there exists a constant c_1 > 0 such that
$$E \sup_{f \in \mathcal{F}} \left| \frac{1}{n} \sum_{j=1}^{\mu_n} \sigma_j Z_{1,j}(f) \right| \;\le\; \frac{c_1 \sqrt{\mu_n}}{n}. \qquad (9)$$
Proof. Denote (h_1, ..., h_N) by h_1^N. One can write E sup_{f∈F} |(1/n) Σ_{j=1}^{μ_n} σ_j Z_{1,j}(f)| = (1/n) E sup_{N≥1} sup_{h_1^N ∈ H^N} sup_{w_1,...,w_N} |Σ_{j=1}^{μ_n} Σ_{k=1}^N w_k σ_j ξ_{(2j−2)b_n+1,2} h_k(ξ_{(2j−2)b_n+1,1})|. Since ξ_{(2j−2)b_n+1,2} and ξ_{(2j'−2)b_n+1,2} are i.i.d. for all j ≠ j' (they come from different blocks), and (σ_j) is a Rademacher sequence, (σ_j ξ_{(2j−2)b_n+1,2} h_k(ξ_{(2j−2)b_n+1,1}))_{j=1,...,μ_n} has the same distribution as (σ_j h_k(ξ_{(2j−2)b_n+1,1}))_{j=1,...,μ_n}. Hence
$$E \sup_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum_{j=1}^{\mu_n} \sigma_j Z_{1,j}(f) \Big| = \frac{1}{n}\, E \sup_{N \ge 1}\; \sup_{h_1^N \in \mathcal{H}^N}\; \sup_{w_1, \ldots, w_N} \Big| \sum_{j=1}^{\mu_n} \sum_{k=1}^{N} \sigma_j w_k\, h_k\!\big(\xi_{(2j-2)b_n+1,1}\big) \Big|.$$
By the same argument as used in [4], p. 53, on the maximum of a linear function over a convex polygon, the supremum is achieved when w_k = 1 for some k. Hence we get E sup_{f∈F} |(1/n) Σ_{j=1}^{μ_n} σ_j Z_{1,j}(f)| = (1/n) E sup_{h∈H} |Σ_{j=1}^{μ_n} σ_j h(ξ_{(2j−2)b_n+1,1})|. Noting that for all j ≠ j', h(ξ_{(2j−2)b_n+1,1}) and h(ξ_{(2j'−2)b_n+1,1}) are i.i.d. and that Rademacher processes are sub-gaussian, we have by [18], Corollary 2.2.8,
$$\frac{1}{n} E \sup_{h \in \mathcal{H}} \Big| \sum_{j=1}^{\mu_n} \sigma_j h\big(\xi_{(2j-2)b_n+1,1}\big) \Big| \le \frac{1}{n} E \sup_{h \in \mathcal{H} \cup \{0\}} \Big| \sum_{j=1}^{\mu_n} \sigma_j h\big(\xi_{(2j-2)b_n+1,1}\big) \Big| \le \frac{c_0 \sqrt{\mu_n}}{n} \int_0^{\infty} \big( \log \sup_{P_n} N(\epsilon, \|\cdot\|_{2,P_n}, \mathcal{H} \cup \{0\}) \big)^{1/2} d\epsilon,$$
where c_0 is a constant and N(ε, ‖·‖_{2,P_n}, H ∪ {0}) is the empirical L_2 covering number. As H has finite VC-dimension (see Assumption 1.III), there exists a positive constant w such that sup_{P_n} N(ε, ‖·‖_{2,P_n}, H ∪ {0}) = O_P(ε^{−w}) (see [18], Theorem 2.6.1). Hence ∫_0^∞ (log sup_{P_n} N(ε, ‖·‖_{2,P_n}, H ∪ {0}))^{1/2} dε < ∞, and (9) follows. □
E. Establishing (2). Combining (6), (8), and (9), we have
$$E \sup_{g \in \mathcal{G}_\lambda} \Big| \frac{1}{n} \sum_{i=1}^{n} g(W_i) - E g(W_1) \Big| \;\le\; 4 b_n \lambda \varphi'(\lambda)\, \frac{c_1 \sqrt{\mu_n}}{n} + \varphi(\lambda)\, \mu_n \beta_W(b_n) + \frac{2 b_n \varphi(\lambda)}{n}.$$
Take b_n = n^b, with 0 ≤ b < 1. By (5), we obtain μ_n ≤ n^{1−b}/2. Besides, as we assumed that the sequence W is algebraically β-mixing (see Definition 2), β_W(n) = O(n^{−r_β}). Then μ_n β_W(b_n) = O(n^{1−b(1+r_β)}), and we arrive at (2).
4.3 Proof Sketch of Lemma 2
A. Working with Independent Blocks and Symmetrization. For any b ∈ [0, 1) and γ ∈ (0, 1 − b), let
$$\epsilon_n = 3\left(2 c_1 + n^{\gamma/2}\right) \lambda \varphi'(\lambda) / n^{(1-b)/2}. \qquad (10)$$
We show
$$P\Big\{ \sup_{g \in \mathcal{G}_\lambda} \Big| \frac{1}{n} \sum_{i=1}^{n} g(W_i) - E g(W_1) \Big| > \epsilon_n \Big\} \le 2 P\Big\{ \sup_{g \in \mathcal{G}_\lambda} \Big| \frac{1}{n} \sum_{j \in O_{\mu_n}} Z_{j,g} \Big| > \epsilon_n / 3 \Big\} + O\!\left(n^{1-b(1+r_\beta)}\right). \qquad (11)$$
Proof. By [12], Lemma 3.1, we have that for any ε_n such that φ(λ) b_n = o(n ε_n),
$$P\Big\{ \sup_{g \in \mathcal{G}_\lambda} \Big| \frac{1}{n} \sum_{i=1}^{n} g(W_i) - E g(W_1) \Big| > \epsilon_n \Big\} \le 2 P\Big\{ \sup_{g \in \mathcal{G}_\lambda} \Big| \frac{1}{n} \sum_{j \in O_{\mu_n}} Z_{j,g} \Big| > \epsilon_n / 3 \Big\} + 4 \mu_n \beta_W(b_n).$$
Set b_n = n^b, with 0 ≤ b < 1. Then μ_n β_W(b_n) = O(n^{1−b(1+r_β)}) (for the same reasons as in Section 4.2 E.). With ε_n as in (10), and since Assumption 1.II implies that λφ'(λ) ≥ φ(λ) − 1, we automatically obtain φ(λ) b_n = o(n ε_n). □
B. McDiarmid's Bounded Difference Inequality. For ε_n as in (10), there exists a constant c_2 > 0 such that
$$P\Big\{ \sup_{g \in \mathcal{G}_\lambda} \Big| \frac{1}{n} \sum_{j \in O_{\mu_n}} Z_{j,g} \Big| > \epsilon_n / 3 \Big\} \le \exp(-4 c_2 n^{\gamma}). \qquad (12)$$
Proof. The Z_{j,g}'s of the odd blocks being independent, we can apply McDiarmid's bounded difference inequality ([19], Theorem 9.2, p. 136) to the function sup_{g∈G_λ} |(1/n) Σ_{j∈O_{μ_n}} Z_{j,g}|, which depends on Z_{1,g}, Z_{3,g}, ..., Z_{2μ_n−1,g}. Noting that changing the value of one variable does not change the value of the function by more than b_n φ(λ)/n, we obtain with b_n = n^b that for all τ > 0,
$$P\Big\{ \sup_{g \in \mathcal{G}_\lambda} \Big| \frac{1}{n} \sum_{j \in O_{\mu_n}} Z_{j,g} \Big| > E \sup_{g \in \mathcal{G}_\lambda} \Big| \frac{1}{n} \sum_{j \in O_{\mu_n}} Z_{j,g} \Big| + \tau \Big\} \le \exp\!\left( \frac{-4 \tau^2 n^{1-b}}{\varphi(\lambda)^2} \right).$$
Combining (8) and (9) from the proof of Lemma 1, and with b_n = n^b, we have E sup_{g∈G_λ} |(1/n) Σ_{j∈O_{μ_n}} Z_{j,g}| ≤ 2λφ'(λ) c_1/n^{(1−b)/2}. With τ = n^{γ/2} λφ'(λ)/n^{(1−b)/2}, we obtain ε_n as in (10). Pick λ_0 such that 0 < λ_0 < λ. Then, since λφ'(λ) ≥ φ(λ) − 1, (12) follows with c_2 = (1 − 1/φ(λ_0))². □
C. Establishing (3). Combining (11) and (12) we obtain (3).
4.4 Proof Sketch of Theorem 1
Let f̄^λ be a function in F minimizing C^λ. With f_n = f̂_n^{λ_n}, we have
$$C(\lambda_n f_n) - C^* = \left( C^{\lambda_n}(\hat{f}_n^{\lambda_n}) - C^{\lambda_n}(\bar{f}^{\lambda_n}) \right) + \left( \inf_{f \in \lambda_n \mathcal{F}} C(f) - C^* \right).$$
Since λ_n → ∞, the second term on the right-hand side converges to zero by Assumption 1.III. By [19], Lemma 8.2, we have C^{λ_n}(f̂_n^{λ_n}) − C^{λ_n}(f̄^{λ_n}) ≤ 2 sup_{f∈F} |C^{λ_n}(f) − C_n^{λ_n}(f)|. By Lemma 2, sup_{f∈F} |C^{λ_n}(f) − C_n^{λ_n}(f)| → 0 with probability 1 if, as n → ∞, λ_n φ'(λ_n) n^{(γ+b−1)/2} → 0 and b > 1/(1 + r_β). Hence if Assumption 1.IV holds, C(λ_n f_n) → C* with probability 1. By [4], Lemma 5, the theorem follows.
References
[1] Schapire, R.E.: The Boosting Approach to Machine Learning: An Overview. In Proc. of the MSRI Workshop on Nonlinear Estimation and Classification (2002)
[2] Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: A statistical view of boosting. Ann. Statist. 28 (2000) 337–374
[3] Jiang, W.: Does Boosting Overfit: Views From an Exact Solution. Technical Report 00-03, Department of Statistics, Northwestern University (2000)
[4] Lugosi, G., Vayatis, N.: On the Bayes-risk consistency of boosting methods. Ann. Statist. 32 (2004) 30–55
[5] Zhang, T.: Statistical Behavior and Consistency of Classification Methods based on Convex Risk Minimization. Ann. Statist. 32 (2004) 56–85
[6] Györfi, L., Härdle, W., Sarda, P., Vieu, P.: Nonparametric Curve Estimation from Time Series. Lecture Notes in Statistics. Springer-Verlag, Berlin (1989)
[7] Irle, A.: On the consistency in nonparametric estimation under mixing assumptions. J. Multivariate Anal. 60 (1997) 123–147
[8] Meir, R.: Nonparametric Time Series Prediction Through Adaptive Model Selection. Machine Learning 39 (2000) 5–34
[9] Modha, D., Masry, E.: Memory-Universal Prediction of Stationary Random Processes. IEEE Trans. Inform. Theory 44 (1998) 117–133
[10] Roussas, G.G.: Nonparametric estimation in mixing sequences of random variables. J. Statist. Plan. Inference 18 (1988) 135–149
[11] Vidyasagar, M.: A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems. Second Edition. Springer-Verlag, London (2002)
[12] Yu, B.: Density estimation in the L∞ norm for dependent data with applications. Ann. Statist. 21 (1993) 711–735
[13] Doukhan, P.: Mixing: Properties and Examples. Springer-Verlag, New York (1995)
[14] Yu, B.: Some Results on Empirical Processes and Stochastic Complexity. Ph.D. Thesis, Dept. of Statistics, U.C. Berkeley (Apr. 1990)
[15] Yu, B.: Rates of convergence for empirical processes of stationary mixing sequences. Ann. Probab. 22 (1994) 94–116
[16] Ledoux, M., Talagrand, M.: Probability in Banach Spaces. Springer, New York (1991)
[17] Meir, R., Zhang, T.: Generalization error bounds for Bayesian mixture algorithms. J. Machine Learning Research (2003)
[18] van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes. Springer Series in Statistics. Springer-Verlag, New York (1996)
[19] Devroye, L., Györfi, L., Lugosi, G.: A Probabilistic Theory of Pattern Recognition. Springer, New York (1996)
1,971 | 2,791 | Gaussian Processes for Multiuser Detection in
CDMA receivers
Juan José Murillo-Fuentes, Sebastian Caro
Dept. Signal Theory and Communications
University of Seville
{murillo,scaro}@us.es
Fernando Pérez-Cruz
Gatsby Computational Neuroscience
University College London
[email protected]
Abstract
In this paper we propose a new receiver for digital communications. We
focus on the application of Gaussian Processes (GPs) to the multiuser
detection (MUD) in code division multiple access (CDMA) systems to
solve the near-far problem. Hence, we aim to reduce the interference
from other users sharing the same frequency band. While usual approaches minimize the mean square error (MMSE) to linearly retrieve
the user of interest, we exploit the same criterion but in the design of a
nonlinear MUD. Since the optimal solution is known to be nonlinear, the
performance of this novel method clearly improves that of the MMSE detectors. Furthermore, the GP based MUD achieves excellent interference
suppression even for short training sequences. We also include some experiments to illustrate that other nonlinear detectors such as those based
on Support Vector Machines (SVMs) exhibit a worse performance.
1 Introduction
One of the major issues in present wireless communications is how users share the resources and, particularly, how they access a common frequency band. Code division
multiple access (CDMA) is one of the techniques exploited in third generation communications systems and is to be employed in the next generation. In CDMA each user uses direct
sequence spread spectrum (DS-SS) to modulate its bits with an assigned code, spreading
them over the entire frequency band. While typical receivers deal only with interference and noise intrinsic to the channel (i.e. inter-symbol interference, intermodulation products, spurious frequencies, and thermal noise), in CDMA we also have interference produced by other users accessing the channel at the same time. The interference limitation due to the simultaneous access of multiple users has been the stimulus to the development
of a powerful family of Signal Processing techniques, namely Multiuser Detection (MUD).
These techniques have been extensively applied to CDMA systems. Thus, most last-generation digital communication systems, such as the Global Positioning System (GPS), wireless
802.11b, the Universal Mobile Telecommunication System (UMTS), etc., may take advantage
of any improvement on this topic.
In CDMA, we face the retrieval of a given user, the user of interest (UOI), with the knowledge of its associated code or even the whole set of users' codes. Hence, we face the
suppression of interference due to other users. If all users transmit with the same power,
Figure 1: Synchronous CDMA system (block diagram: each user's bits b_t(k) are upsampled by M, filtered by the code filters h_k(z), summed through the channel C(z) with additive noise n_t, sampled at the chip rate, and passed to the MUD).
but the UOI is far from the receiver, most users reach the receiver with a larger amplitude,
making it more difficult to detect the bits of the UOI. This is well-known as the near-far
problem. Simple detectors can be designed by minimizing the mean square error (MMSE)
to linearly retrieve the user of interest [5]. However, these detectors need large sequences
of training data. Besides, the optimal solution is known to be nonlinear.
There have been several attempts to solve the problem using nonlinear techniques. There are
solutions based on Neural Networks such as multilayer perceptron or radial basis functions
[1, 3], but training times are long and unpredictable. Recently, support vector machines
(SVM) have also been applied to CDMA MUD [4]. This solution needs very long training
sequences (a few hundreds bits) and they are only tested in toy examples with very few
users and short spreading sequences (the code for each user). In this paper, we will present
a multiuser detector based on Gaussian Processes [7]. The MUD detector is inspired by
the linear MMSE criteria, which can be interpreted as a Bayesian linear regressor. In this
sense, we can extend the linear MMSE criteria to nonlinear decision functions using the
same ideas developed in [6] to present Gaussian Processes for regression.
The rest of the paper is organised as follows. In Section 2, we present the multiuser detection problem in CDMA communication systems and the widely used minimum mean
square error receiver. We propose a nonlinear receiver based on Gaussian Processes in Section 3. Section 4 is devoted to show, through computer experiments, the advantages of the
GP-MUD receiver with short training sequences. We compare it to the linear MMSE and
the nonlinear SVM MUD. We conclude the paper in Section 5 presenting some remarks
and future work.
2 CDMA Communication System Model and MUD
Consider a synchronous CDMA digital communication system [5] as depicted in Figure
1. Its main goal is to share the channel between different users, discriminating between
them by different assigned codes. Each transmitted bit is upsampled and multiplied by
the users? spreading codes and then the chips for each bit are transmitted into the channel
(each element of the spreading code is either +1 or ?1 and they are known as chips). The
channel is assumed to be linear and noisy, therefore the chips from different users are added
together, plus Gaussian noise. Hence, the MUD has to recover from these chips the bits
corresponding to each user. At each time step t, the signal in the receiver can be represented
in matrix notation as:
x_t = H A b_t + n_t   (1)
where b_t is a column vector that contains the bits (+1 or −1) for the K users at time t.
The K × K diagonal matrix A contains the amplitude of each user, which represents the
attenuation that each user's transmission suffers through the channel (this attenuation depends on the distance between the user and the receiver). H is an L × K matrix which
contains in each column the L-dimensional spreading code for each of the K users. The
spreading codes are designed to present a low cross-correlation between them and between
any shifted version of the codes, to guarantee that the bits from each user can be readily
recovered. The codes are known as spreading sequences, because they augment the occupied bandwidth of the transmitted signal by L. Finally, xt represents the L received chips
to which Gaussian noise has been added, which is denoted by nt .
At reception, we aim to estimate the original transmitted symbols of any user i, b_t(i),
hereafter the user of interest. Linear MUDs estimate these bits as
b̂_t(i) = sgn{w_i^T x_t}.   (2)
The matched filter (MF) w_i = h_i, a simple correlation between x_t and the i-th spreading code, is the optimal receiver if there were no additional users in the system, i.e. the
received signal is only corrupted by Gaussian noise. The near-far problem arises when remaining users, apart from the UOI, are received with significantly higher amplitude. While
the optimal solution is known to be nonlinear [5], some linear receivers such as the minimum mean square error (MMSE) present good performances and are used in practice. The
MMSE receiver for the i-th user solves:
w_i* = arg min_{w_i} E[(b_t(i) − w_i^T x_t)²] = arg min_{w_i} E[(b_t(i) − w_i^T (H A b_t + n_t))²]   (3)
where w_i represents the decision function of the linear classifier. We can derive the MMSE
receiver by taking derivatives with respect to w_i and equating to zero, obtaining:
w_i^{MMSE-dec} = R_xx^{-1} h_i   (4)
where R_xx = E[x_t x_t^T] is the correlation matrix of the received vectors and h_i represents the
spreading sequence of the UOI. This receiver is known as the decentralized MMSE receiver
as it can be implemented without knowing the spreading sequences of the remaining users.
Its main limitation is its performance, which is very low even for high signal to noise ratio,
and it needs many examples (thousands) before it can recover the received symbols.
If the spreading codes of all the users are available, as in the base station, this information
can be used to improve the performance of the MMSE detector. We can define z_t = H^T x_t, which is a vector of sufficient statistics for this problem [5]. The vector z_t is
the matched-filter output for each user and it reduces the dimensionality of our problem
from the number of chips L to the number of users K, which is significantly lower in most
applications. In this case the receiver is known as the centralized detector and it is defined
as:
w_i^{MMSE-cent} = H R_zz^{-1} H^T h_i   (5)
where R_zz = E[z_t z_t^T] is the correlation matrix of the received chips after the MFs.
These MUDs have good convergence properties and do not need a training sequence to
decode the received bits, but they need large training sequences before their probability of
error is low. Therefore the initially received bits will present a very high probability of
error that will make it impossible to send any information on them. Some improvements can
be achieved by using higher order statistics [2], but still the training sequences are not short
enough for most applications.
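As a sketch of how (4) and (5) are used in practice, the snippet below estimates R_xx and R_zz by sample averages over N received vectors and forms both detectors; the function and variable names, and the small diagonal loading for numerical stability, are our own additions.

```python
import numpy as np

def mmse_detectors(X, H, i, reg=1e-6):
    """X: (L, N) received chip vectors; H: (L, K) spreading codes; i: user of interest."""
    L, N = X.shape
    Rxx = X @ X.T / N + reg * np.eye(L)              # sample estimate of E[x x^T]
    w_dec = np.linalg.solve(Rxx, H[:, i])            # decentralized detector, eq. (4)

    Z = H.T @ X                                      # matched-filter outputs z_t
    Rzz = Z @ Z.T / N + reg * np.eye(H.shape[1])     # sample estimate of E[z z^T]
    w_cen = H @ np.linalg.solve(Rzz, H.T @ H[:, i])  # centralized detector, eq. (5)
    return w_dec, w_cen

# bits are then detected as sign(w^T x), as in eq. (2)
```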
3 Gaussian Processes for Multiuser Detection
The MMSE detector minimizes the functional in (3), which gives the best linear classifier. As we know, the optimal classifier is nonlinear [5], and the MMSE criteria can be
readily extended to provide nonlinear models by mapping the received chips to a higher
dimensional space. In this case we will need to solve:
w_i = arg min_{w_i} { Σ_{t=1}^{N} (b_t(i) − w_i^T φ(x_t))² + λ ||w_i||² }   (6)
in which we have replaced the expectation by the empirical mean over a training set and
have incorporated a regularizer to avoid overfitting. φ(·) represents the nonlinear mapping
of the received chips. The w_i that minimizes (6) can be interpreted as the mode of the
parameters in a Bayesian linear regressor, as noted in [6], and since the likelihood and the
prior are both Gaussian, so is the posterior. For any received symbol x*, we know
that it will be distributed as a Gaussian with mean:
μ(x*) = (1/λ) φ(x*)^T A^{-1} Φ^T b   (7)
and variance
σ²(x*) = φ(x*)^T A^{-1} φ(x*)   (8)
where Φ = [φ(x_1), φ(x_2), . . . , φ(x_N)]^T, b = [b_1(i), b_2(i), . . . , b_N(i)]^T and A = (1/λ) Φ^T Φ + I.
In the case the nonlinear mapping is unknown, we can still obtain the mean and variance
for each received sample using the kernel of the transformation. The mean is
μ(x*) = k^T P^{-1} b   (9)
and the variance
σ²(x*) = k(x*, x*) − k^T P^{-1} k   (10)
where k(·, ·) = φ^T(·) φ(·) is the kernel of the nonlinear transformation, k = [k(x*, x_1), k(x*, x_2), . . . , k(x*, x_N)]^T, and
P = Φ Φ^T + λ I = K + λ I   (11)
where (K)_{tℓ} = k(x_t, x_ℓ). The kernel that we use in our experiments is:
k(x_t, x_ℓ) = e^{θ[1]} exp(−e^{θ[4]} ||x_t − x_ℓ||²) + e^{θ[3]} x_t^T x_ℓ + e^{θ[2]} δ_{t,ℓ}   (12)
The covariance function in (12) is a good kernel for solving the GP-MUD, because it contains a linear and a nonlinear part. The optimal decision surface for MUD is nonlinear,
unless the spreading codes are orthogonal to each other, and its deviation from the linear
solution depends on how strong the correlations between codes are. In most cases, a linear
detector is very close to the optimal decision surface, as spreading codes are almost orthogonal, and only a minor correction is needed to achieve the optimal decision boundary. In
this sense the proposed GP covariance function is ideal for the problem. The linear part can
mimic the best linear decision boundary and the nonlinear part modifies it, where the linear
explanation is not optimal. Also using a radial basis kernel for the nonlinear part is a good
choice to achieve nonlinear decisions. Because, the received chips form a constellation of
2K clouds of points with Gaussian spread around its centres.
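A minimal sketch of the resulting GP-MUD predictor, implementing (9)-(11) with the kernel (12); the hyperparameters θ are simply fixed here rather than fit by maximum likelihood, the Python indices theta[0..3] stand for θ[1..4] above, and the noise term e^{θ[2]} δ_{t,ℓ} is folded into the regularizer lam.

```python
import numpy as np

def kernel(X1, X2, theta):
    # eq. (12) without the delta (noise) term; that term enters via lam below
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(theta[0]) * np.exp(-np.exp(theta[3]) * sq) + np.exp(theta[2]) * X1 @ X2.T

def gp_mud_predict(Xtr, b, Xte, theta, lam):
    K = kernel(Xtr, Xtr, theta)
    P = K + lam * np.eye(len(Xtr))                # eq. (11)
    alpha = np.linalg.solve(P, b)
    k = kernel(Xte, Xtr, theta)
    mean = k @ alpha                              # eq. (9)
    var = np.diag(kernel(Xte, Xte, theta)) \
          - np.einsum('ij,ij->i', k, np.linalg.solve(P, k.T).T)   # eq. (10)
    return np.sign(mean), var                     # detected bits and predictive variances
```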
Picturing the receiver as a Gaussian Process for regression, instead of a Regularised Least
Squares functional, allows us to either obtain the hyperparameters by maximizing the likelihood or marginalise them out using Monte Carlo techniques, as explained in [6]. For the
Figure 2: Bit Error Rate versus Signal to Noise Ratio for the MF, MMSE-Centralized, MMSE-Decentralized, SVM-Centralized, GP-Centralized and GP-Decentralized detectors with K = 8 users and n = 30 training samples. The powers of the interfering users are distributed homogeneously between 0 and 30 dB above that of the UOI.
problem at hand, speed is a must, and we will be using the maximum-likelihood hyperparameters.
We have just shown above how we can make predictions in the nonlinear case (9) using
the received symbols from the channel. In an analogy with the MMSE receiver, this will
correspond to the decentralized GP-MUD detector as we will not need to know the other
users? codes to detect the bits sent to us. It is also relevant to notice that we do not need our
spreading code for detection, as the decentralized MMSE detector did. We can also obtain
a centralized GP-MUD detector using as input vectors z t = H ? xt .
4 Experiments
In this section we include the typical evaluation of the performance in a digital communications system, i.e., Bit Error Rate (BER). The test environment is a synchronous CDMA
system in which the users are spread using Gold sequences with spreading factor L = 31
and K = 8 users, which are typical values in CDMA based mobile communication systems. We consider the same amplitude matrix in all experiments. These amplitudes are
random values to achieve an interferer to signal ratio of 30 dB. Hence, the interferers are
30 dB over the UOI. We study the worst scenario and hence we will detect the user which
arrives at the receiver with the lowest amplitude.
We compare the performance of the GP centralized and decentralized MUDs to the performance of the MMSE detectors, the Matched Filter detector and the (centralized) SVM-MUD in [4]. The SVM-MUD detector uses a Gaussian kernel and its width is adapted
incorporating knowledge of the noise variance in the channel. We found that this setting
Figure 3: Bit Error Rate versus Signal to Noise Ratio for the MF, MMSE-Centralized, MMSE-Decentralized, SVM-Centralized, GP-Centralized and GP-Decentralized detectors with K = 8 users and n = 80 training samples. The powers of the interfering users are distributed homogeneously between 0 and 30 dB above that of the UOI.
usually does not perform well for this experimental specification and we have set them using validation. We believe this might be due to either the reduced number of users in their
experiments (2 or 3) or because they used the same amplitude for all the users, so they did
not encounter the near-far problem.
We have included three experiments in which the number of training samples is set to 30, 80 and 160. For each training set we have computed the BER over
10^6 bits. The reported results are mean curves over 50 different trials.
The results in Figure 2 show that the detectors based on GPs are able to reduce the probability of error as the signal to noise ratio in the channel decreases with only 30 samples
in the training sequence. The GP centralized MUD is only 1.5-2dB worse than the best
achievable probability of error, which is obtained in absence of interference (indicated by
the dashed line). The GP decentralized MUD reduces the probability of error as the signal
to noise increases, but it remains between 3-4dB from the optimal performance. The other
detectors are not able to decrease the BER even for a very high signal to noise ratio in the
channel. These figures show that the GP-based MUD can outperform the other MUDs when
very short training sequences are available.
Figure 3 highlights that the SVM-MUD (centralized) and the MSSE centralized detectors
are able to reduce the BER as the SNR increases, but they are still far from the performance
of the GP-MUD. The centralized GP-MUD basically provides optimal performance as it is
less than 0.3 dB from the best achievable BER when there is no interference in the channel. The decentralized GP-MUD outperforms the other two centralized detectors (SVM
and MMSE) since it is able to provide lower BER without needing to know the code of the
remaining users.
Figure 4: Bit Error Rate versus Signal to Noise Ratio for the MF, MMSE-Centralized, MMSE-Decentralized, SVM-Centralized, GP-Centralized and GP-Decentralized detectors with K = 8 users and n = 160 training samples. The powers of the interfering users are distributed homogeneously between 0 and 30 dB above that of the UOI.
Finally, in Figure 4 we include the results for 160 training samples. In this case, the centralized GP-MUD lies above the optimal BER curve and the decentralized GP-MUD performs
as the SVM-MUD detector. The centralized MMSE detector still presents very high probability of error for high signal to noise ratios and we need over 500 samples to obtain a
performance similar to the centralized GP with 80 samples. For 160 samples the MMSE
decentralized is already able to slightly reduce the bit error rate for very high signal to noise
ratios. But to achieve the performance showed by the decentralized GP-MUD it needs several thousands samples.
5 Conclusions and Further Work
We propose a novel approach based on Gaussian Processes for regression to solve the near-far problem in CDMA receivers. Since the optimal solution is known to be nonlinear, the
Gaussian Processes are able to obtain this nonlinear decision surface with very few training
examples. This is the main advantage of this method as it only requires a few tens training
examples instead of the few hundreds needed by other nonlinear techniques as SVMs.
This will allow its application in real communication systems, as training sequences of 26
samples are typically used in the GSM standard for mobile telecommunications.
The most relevant result of this paper is the performance shown by the decentralized GPMUD receiver, since it can be directly used over any CDMA system. The decentralized
GP-MUD receiver does not need to know the codes from the other users and does not
require the users to be aligned, as the other methods do. While the other receiver will
degrade its performance if the users are not aligned, the decentralized GP-MUD receiver
will not, providing a more robust solution to the near far problem.
We have presented some preliminary work, which shows that GPs for regression are suitable for the near-far problem in MUD. We have left for further work a more extensive set
of experiments changing other parameters of the system such as: the number of users, the
length of the spreading code, and the interferences with other users. But still, we believe
the reported results are significant since we obtain low bit error rates for training sequences
as short as 30 bits.
Acknowledgements
Fernando Pérez-Cruz is supported by the Spanish Ministry of Education Postdoctoral Fellowship EX2004-0698. This work has been partially funded by research grants TIC2003-02602 and TIC2003-03781 from the Spanish Ministry of Education.
References
[1] B. Aazhang, B. P. Paris, and G. C. Orsak. Neural networks for multiuser detection in code-division multiple-access communications. IEEE Transactions on Communications,
40:1212-1222, 1992.
[2] Antonio Caamaño-Fernandez, Rafael Boloix-Tortosa, Javier Ramos, and Juan J.
Murillo-Fuentes. High order statistics in multiuser detection. IEEE Trans. on Man
and Cybernetics C. Accepted for publication, 2004.
[3] U. Mitra and H. V. Poor. Neural network techniques for adaptive multiuser demodulation. IEEE Journal on Selected Areas in Communications, 12:1460-1470, 1994.
[4] S. Chen, A. K. Samingan, and L. Hanzo. Support vector machine multiuser receiver for
DS-CDMA signals in multipath channels. IEEE Transactions on Neural Networks,
12(3):604?611, December 2001.
[5] S. Verdú. Multiuser Detection. Cambridge University Press, 1998.
[6] C. Williams. Prediction with Gaussian processes: From linear regression to linear
prediction and beyond.
[7] Christopher K. I. Williams and Carl Edward Rasmussen. Gaussian processes for regression. In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors,
Proc. Conf. Advances in Neural Information Processing Systems, NIPS, volume 8. MIT
Press, 1995.
Robust Fisher Discriminant Analysis
Seung-Jean Kim
Alessandro Magnani
Stephen P. Boyd
Information Systems Laboratory
Electrical Engineering Department, Stanford University
Stanford, CA 94305-9510
[email protected]
[email protected]
[email protected]
Abstract
Fisher linear discriminant analysis (LDA) can be sensitive to the problem data. Robust Fisher LDA can systematically alleviate the sensitivity
problem by explicitly incorporating a model of data uncertainty in a classification problem and optimizing for the worst-case scenario under this
model. The main contribution of this paper is to show that with general
convex uncertainty models on the problem data, robust Fisher LDA can
be carried out using convex optimization. For a certain type of product
form uncertainty model, robust Fisher LDA can be carried out at a cost
comparable to standard Fisher LDA. The method is demonstrated with
some numerical examples. Finally, we show how to extend these results
to robust kernel Fisher discriminant analysis, i.e., robust Fisher LDA in a
high dimensional feature space.
1 Introduction
Fisher linear discriminant analysis (LDA), a widely-used technique for pattern classification, finds a linear discriminant that yields optimal discrimination between two classes
which can be identified with two random variables, say X and Y in R^n. For a (linear)
discriminant characterized by w ∈ R^n, the degree of discrimination is measured by the
Fisher discriminant ratio
f(w, μ_x, μ_y, Σ_x, Σ_y) = (w^T (μ_x − μ_y)(μ_x − μ_y)^T w) / (w^T (Σ_x + Σ_y) w) = ((w^T (μ_x − μ_y))²) / (w^T (Σ_x + Σ_y) w),
where μ_x and Σ_x (μ_y and Σ_y) denote the mean and covariance of X (Y). A discriminant
that maximizes the Fisher discriminant ratio is given by
w_nom = (Σ_x + Σ_y)^{-1} (μ_x − μ_y),
which gives the maximum Fisher discriminant ratio
(μ_x − μ_y)^T (Σ_x + Σ_y)^{-1} (μ_x − μ_y) = max_{w≠0} f(w, μ_x, μ_y, Σ_x, Σ_y).
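For reference, computing this nominal discriminant from data is a single linear solve; a minimal sketch using empirical estimates of the means and covariances (the function name is ours):

```python
import numpy as np

def fisher_lda(X, Y):
    """X, Y: (n_samples, n_features) samples of the two classes."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    S = np.cov(X, rowvar=False) + np.cov(Y, rowvar=False)   # Sigma_x + Sigma_y
    return np.linalg.solve(S, mu_x - mu_y)                  # w_nom
```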
In applications, the problem data μ_x, μ_y, Σ_x, and Σ_y are not known but are estimated
from sample data. Fisher LDA can be sensitive to the problem data: the discriminant
w_nom computed from an estimate of the parameters μ_x, μ_y, Σ_x, and Σ_y can give very
poor discrimination for another set of problem data that is also a reasonable estimate of the
parameters. In this paper, we attempt to systematically alleviate this sensitivity problem
by explicitly incorporating a model of data uncertainty in the classification problem and
optimizing for the worst-case scenario under this model.
We assume that the problem data μ_x, μ_y, Σ_x, and Σ_y are uncertain, but known to belong to
a convex compact subset U of R^n × R^n × S^n_{++} × S^n_{++}. Here we use S^n_{++} (S^n_{+}) to denote
the set of all n × n symmetric positive definite (semidefinite) matrices. We make one
technical assumption: for each (μ_x, μ_y, Σ_x, Σ_y) ∈ U, we have μ_x ≠ μ_y. This assumption
simply means that for each possible value of the means and covariances, the classes are
distinguishable via Fisher LDA.
The worst-case analysis problem of finding the worst-case means and covariances for a
given discriminant w can be written as
minimize   f(w, μ_x, μ_y, Σ_x, Σ_y)
subject to (μ_x, μ_y, Σ_x, Σ_y) ∈ U,   (1)
with variables μ_x, μ_y, Σ_x, and Σ_y. The optimal value of this problem is the worst-case
Fisher discriminant ratio (over the class U of possible means and covariances), and any optimal points for this problem are called worst-case means and covariances. These depend
on w.
We will show in §2 that (1) is a convex optimization problem, since the Fisher discriminant
ratio is a convex function of μ_x, μ_y, Σ_x, Σ_y for a given discriminant w. As a result, it is
computationally tractable to find the worst-case performance of a discriminant w over the
set of possible means and covariances.
The robust Fisher LDA problem is to find a discriminant that maximizes the worst-case
Fisher discriminant ratio. This can be cast as the optimization problem
maximize   min_{(μ_x,μ_y,Σ_x,Σ_y)∈U} f(w, μ_x, μ_y, Σ_x, Σ_y)
subject to  w ≠ 0,   (2)
with variable w. We denote any optimal w for this problem as w*. Here we choose a linear
discriminant that maximizes the Fisher discrimination ratio, with the worst possible means
and covariances that are consistent with our data uncertainty model.
The main result of this paper is to give an effective method for solving the robust Fisher
LDA problem (2). We will show in §2 that the robust optimal Fisher discriminant w* can
be found as follows. First, we solve the (convex) optimization problem
minimize   max_{w≠0} f(w, μ_x, μ_y, Σ_x, Σ_y) = (μ_x − μ_y)^T (Σ_x + Σ_y)^{-1} (μ_x − μ_y)
subject to (μ_x, μ_y, Σ_x, Σ_y) ∈ U,   (3)
with variables (μ_x, μ_y, Σ_x, Σ_y). Let (μ_x*, μ_y*, Σ_x*, Σ_y*) denote any optimal point. Then the
discriminant
w* = (Σ_x* + Σ_y*)^{-1} (μ_x* − μ_y*)   (4)
is a robust optimal Fisher discriminant, i.e., it is optimal for (2). Moreover, we will see
that μ_x*, μ_y* and Σ_x*, Σ_y* are worst-case means and covariances for the robust optimal Fisher
discriminant w*. Since convex optimization problems are tractable, this means that we
have a tractable general method for computing a robust optimal Fisher discriminant.
A robust Fisher discriminant problem of modest size can be solved by standard convex
optimization methods, e.g., interior-point methods [3]. For some special forms of the uncertainty model, the robust optimal Fisher discriminant can be solved more efficiently than
by a general convex optimization formulation. In §3, we consider an important special form
for U for which a more efficient formulation can be given.
In comparison with the "nominal" Fisher LDA, which is based on the means and covariances estimated from the sample data set without considering the estimation error, the
robust Fisher LDA performs well even when the sample size used to estimate the means
and covariances is small, resulting in estimates which are not accurate. This will be demonstrated with some numerical examples in §4.
Recently, there has been a growing interest in kernel Fisher discriminant analysis, i.e., Fisher
LDA in a higher dimensional feature space, e.g., [7]. Our results can be extended to robust
kernel Fisher discriminant analysis under certain uncertainty models. This will be briefly
discussed in §5.
Various types of robust classification problems have been considered in the prior literature, e.g., [2, 5, 6]. Most of the research has focused on formulating robust classification
problems that can be efficiently solved via convex optimization. In particular, the robust
classification method developed in [6] is based on the criterion
g(w, μ_x, μ_y, Σ_x, Σ_y) = |w^T (μ_x − μ_y)| / ((w^T Σ_x w)^{1/2} + (w^T Σ_y w)^{1/2}),
which is similar to the Fisher discriminant ratio f . With a specific uncertainty model on the
means and covariances, the robust classification problem with discrimination criterion g can
be cast as a second-order cone program, a special type of convex optimization problem [5].
With general uncertainty models, however, it is not clear whether robust discriminant analysis with g can be performed via convex optimization.
2 Robust Fisher LDA
We first consider the worst-case analysis problem (1). Here we consider the discriminant w
as fixed, and the parameters μ_x, μ_y, Σ_x, and Σ_y are variables, constrained to lie in the
convex uncertainty set U. To show that (1) is a convex optimization problem, we must
show that the Fisher discriminant ratio is a convex function of μ_x, μ_y, Σ_x, and Σ_y. To
show this, we express the Fisher discriminant ratio f as the composition
f(w, μ_x, μ_y, Σ_x, Σ_y) = g(H(μ_x, μ_y, Σ_x, Σ_y)),
where g(u, t) = u²/t and H is the function
H(μ_x, μ_y, Σ_x, Σ_y) = (w^T (μ_x − μ_y), w^T (Σ_x + Σ_y) w).
The function H is linear (as a mapping from μ_x, μ_y, Σ_x, and Σ_y into R²), and the function
g is convex (provided t > 0, which holds here). Thus, the composition f is a convex
function of μ_x, μ_y, Σ_x, and Σ_y. (See [3].)
Now we turn to the main result of this paper. Consider a function of the form
R(w, a, B) = (w^T a)² / (w^T B w),   (5)
which is the Rayleigh quotient for the matrix pair a a^T ∈ S^n_{+} and B ∈ S^n_{++}, evaluated at w.
The robust Fisher LDA problem (2) is equivalent to a problem of the form
maximize   min_{(a,B)∈V} R(w, a, B)
subject to  w ≠ 0,   (6)
where
a = μ_x − μ_y,   B = Σ_x + Σ_y,   V = {(μ_x − μ_y, Σ_x + Σ_y) | (μ_x, μ_y, Σ_x, Σ_y) ∈ U}.   (7)
(This equivalence means that robust FLDA is a special type of robust matched filtering
problem studied in the 1980s; see, e.g., [8] for more on robust matched filtering.)
We will prove a "nonconventional" minimax theorem for a Rayleigh quotient of the
form (5), which will establish the main result described in §1. To do this, we consider
a problem of the form
minimize   a^T B^{-1} a
subject to (a, B) ∈ V,   (8)
with variables a ∈ R^n, B ∈ S^n_{++}, where V is a convex compact subset of R^n × S^n_{++} such
that for each (a, B) ∈ V, a is not zero. The objective of this problem is a matrix fractional
function and so is convex on R^n × S^n_{++}; see [3, §3.1.7]. Our problem (3) is the same as (8),
with (7). It follows that (3) is a convex optimization problem.
The following theorem states the minimax theorem for the function R. While R is convex in
(a, B) for fixed w, it is not concave in w for fixed (a, B), so conventional convex-concave
minimax theorems do not apply here.
Theorem 1. Let (a*, B*) be an optimal solution to the problem (8), and let w* = B*^{-1} a*.
Then (w*, a*, B*) satisfies the minimax property
R(w*, a*, B*) = max_{w≠0} min_{(a,B)∈V} R(w, a, B) = min_{(a,B)∈V} max_{w≠0} R(w, a, B),   (9)
and the saddle point property
R(w, a*, B*) ≤ R(w*, a*, B*) ≤ R(w*, a, B),   ∀w ∈ R^n \ {0}, ∀(a, B) ∈ V.   (10)
Proof. It suffices to prove (10), since the saddle point property (10) implies the minimax
property (9) [1, §2.6]. We start by observing that R(w, a*, B*) is maximized over nonzero
w by w* = B*^{-1} a* (by the Cauchy-Schwarz inequality). What remains is to show
min_{(a,B)∈V} R(w*, a, B) = R(w*, a*, B*).   (11)
Since a? and B ? are optimal for the convex problem (8) (by definition), they must satisfy
the optimality condition
⟨∇_a (a^T B^{-1} a)|_(a*,B*), a − a*⟩ + ⟨∇_B (a^T B^{-1} a)|_(a*,B*), B − B*⟩ ≥ 0,   ∀(a, B) ∈ V
(see [3, §4.2.3]). Using ∇_a (a^T B^{-1} a) = 2 B^{-1} a, ∇_B (a^T B^{-1} a) = −B^{-1} a a^T B^{-1}, and
⟨X, Y⟩ = Tr(XY) for X, Y ∈ S^n, where Tr denotes trace, we can express the optimality
condition as
2 a*^T B*^{-1} (a − a*) − Tr(B*^{-1} a* a*^T B*^{-1} (B − B*)) ≥ 0,   ∀(a, B) ∈ V,
or equivalently,
2 w*^T (a − a*) − w*^T (B − B*) w* ≥ 0,   ∀(a, B) ∈ V.   (12)
Now we turn to the convex optimization problem
minimize   R(w*, a, B)
subject to (a, B) ∈ V,   (13)
with variables (a, B). We will show that (a*, B*) is optimal for this problem, which will
establish (11).
A pair (ā, B̄) is optimal for (13) if and only if
⟨∇_a ((w*^T a)² / (w*^T B w*))|_(ā,B̄), a − ā⟩ + ⟨∇_B ((w*^T a)² / (w*^T B w*))|_(ā,B̄), B − B̄⟩ ≥ 0,   ∀(a, B) ∈ V.
Using
∇_a ((w*^T a)² / (w*^T B w*)) = 2 (a^T w* / (w*^T B w*)) w*,
∇_B ((w*^T a)² / (w*^T B w*)) = −((a^T w*)² / (w*^T B w*)²) w* w*^T,
the optimality condition can be written as
2 (ā^T w* / (w*^T B̄ w*)) w*^T (a − ā) − ((ā^T w*)² / (w*^T B̄ w*)²) w*^T (B − B̄) w* ≥ 0,   ∀(a, B) ∈ V.
Substituting ā = a*, B̄ = B*, and noting that a*^T w* / (w*^T B* w*) = 1, the optimality
condition reduces to
2 w*^T (a − a*) − w*^T (B − B*) w* ≥ 0,   ∀(a, B) ∈ V,
which is precisely (12). Thus, we have shown that (a*, B*) is optimal for (13), which in
turn establishes (11).
3 Robust Fisher LDA with product form uncertainty models
In this section, we focus on robust Fisher LDA with the product form uncertainty model
U = M × S,   (14)
where M is the set of possible means and S is the set of possible covariances. For this
model, the worst-case Fisher discriminant ratio can be written as
min_{(μ_x,μ_y,Σ_x,Σ_y)∈U} f(w, μ_x, μ_y, Σ_x, Σ_y) = min_{(μ_x,μ_y)∈M} (w^T (μ_x − μ_y))² / max_{(Σ_x,Σ_y)∈S} w^T (Σ_x + Σ_y) w.
If we can find an analytic expression for max_{(Σ_x,Σ_y)∈S} w^T (Σ_x + Σ_y) w (as a function of
w), we can simplify the robust Fisher LDA problem.
As a more specific example, we consider the case in which S is given by
S = S_x × S_y,
S_x = {Σ_x | Σ_x ⪰ 0, ||Σ_x − Σ̄_x||_F ≤ ρ_x},
S_y = {Σ_y | Σ_y ⪰ 0, ||Σ_y − Σ̄_y||_F ≤ ρ_y},   (15)
where ρ_x, ρ_y are positive constants, Σ̄_x, Σ̄_y ∈ S^n_{++}, and ||A||_F denotes the Frobenius norm
of A, i.e., ||A||_F = (Σ_{i,j=1}^n A_{ij}²)^{1/2}. For this case, we have
max_{(Σ_x,Σ_y)∈S} w^T (Σ_x + Σ_y) w = w^T (Σ̄_x + Σ̄_y + (ρ_x + ρ_y) I) w.   (16)
Here we have used the fact that for given Σ̄ ∈ S^n_{++}, max_{||Σ−Σ̄||_F ≤ ρ} x^T Σ x = x^T (Σ̄ + ρ I) x
(see, e.g., [6]). The worst-case Fisher discriminant ratio can be expressed as
min_{(μ_x,μ_y)∈M} (w^T (μ_x − μ_y))² / (w^T (Σ̄_x + Σ̄_y + (ρ_x + ρ_y) I) w).
This is the same worst-case Fisher discriminant ratio obtained for a problem in which the
covariances are certain, i.e., fixed to be Σ̄_x + ρ_x I and Σ̄_y + ρ_y I, and the means lie in the set
M. We conclude that a robust optimal Fisher discriminant with the uncertainty model (14)
in which S has the form (15) can be found by solving a robust Fisher LDA problem with
these fixed values for the covariances. From the general solution method described in §1, it
is given by
w* = (Σ̄_x + Σ̄_y + (ρ_x + ρ_y) I)^{-1} (μ_x* − μ_y*),
where μ_x* and μ_y*
solve the convex optimization problem
minimize   (μ_x − μ_y)^T (Σ̄_x + Σ̄_y + (ρ_x + ρ_y) I)^{-1} (μ_x − μ_y)
subject to (μ_x, μ_y) ∈ M,   (17)
with variables μ_x and μ_y.
The problem (17) is relatively simple: it involves minimizing a convex quadratic function
over the set of possible μ_x and μ_y. For example, if M is a product of two ellipsoids (e.g.,
μ_x and μ_y each lie in some confidence ellipsoid), the problem (17) is to minimize a convex
quadratic subject to two convex quadratic constraints. Such a problem is readily solved in
O(n³) flops, since the dual problem has two variables, and evaluating the dual function
and its derivatives can be done in O(n³) flops [3]. Thus, the effort to solve the robust problem is
of the same order (i.e., n³) as solving the nominal Fisher LDA (but with a substantially larger
constant).
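A crude sketch of this pipeline for the simplest choice of M, a product of two Euclidean balls around the nominal means, solved by projected gradient descent; this stands in for the O(n³) dual method described above, and the step size, ball radii, and function names are our own assumptions.

```python
import numpy as np

def robust_fisher_product(mu_x, mu_y, S, delta_x, delta_y, step=0.05, iters=2000):
    """S = Sigma_x_bar + Sigma_y_bar + (rho_x + rho_y) I, the fixed worst-case covariance.
    M is taken to be a product of two Euclidean balls of radii delta_x, delta_y."""
    Sinv = np.linalg.inv(S)
    ux, uy = mu_x.copy(), mu_y.copy()
    proj = lambda u, c, r: c + (u - c) * min(1.0, r / (np.linalg.norm(u - c) + 1e-12))
    for _ in range(iters):
        g = 2 * Sinv @ (ux - uy)          # gradient of (ux - uy)^T S^{-1} (ux - uy)
        ux = proj(ux - step * g, mu_x, delta_x)
        uy = proj(uy + step * g, mu_y, delta_y)
    return Sinv @ (ux - uy)               # robust discriminant, cf. the formula above
```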
4 Numerical results
To demonstrate robust Fisher LDA, we use the sonar and ionosphere benchmark problems
from the UCI repository (www.ics.uci.edu/?mlearn/MLRepository.html).
The two benchmark problems have 208 and 351 points, respectively, and the dimension
of each data point is 60 and 34, respectively. Each data set is randomly partitioned into
a training set and a test set. We use the training set to compute the optimal discriminant
and then test its performance using the test set. A larger training set typically gives better
test performance. We let η denote the size of the training set, as a fraction of the total
number of data points. For example, η = 0.3 means that 30% of the data points are used
for training, and 70% are used to test the resulting discriminant. For various values of η,
we generate 100 random partitions of the data (for each of the two benchmark problems),
and collect the results.
We use the following uncertainty models for the means μ_x, μ_y and the covariances Σ_x, Σ_y:
(μ_x − μ̄_x)^T P_x (μ_x − μ̄_x) ≤ 1,   ||Σ_x − Σ̄_x||_F ≤ ρ_x,
(μ_y − μ̄_y)^T P_y (μ_y − μ̄_y) ≤ 1,   ||Σ_y − Σ̄_y||_F ≤ ρ_y.
Here the vectors μ̄_x, μ̄_y represent the nominal means, the matrices Σ̄_x, Σ̄_y represent
the nominal covariances, and the matrices P_x, P_y and the constants ρ_x and ρ_y represent
the confidence regions. The parameters are estimated through a resampling technique [4]
as follows. For a given training set we create 100 new sets by resampling the original
training set with a uniform distribution over all the data points. For each of these sets we
estimate its mean and covariance and then take their average values as the nominal mean
and covariance. We also evaluate the covariance Σ_μ of all the means obtained with the
resampling. We then take P_x = Σ_μ^{-1}/n and P_y = Σ_μ^{-1}/n. This choice corresponds
to a 50% confidence ellipsoid in the case of a Gaussian distribution. The parameters ρ_x
and ρ_y are taken to be the maximum deviations between the covariances and the average
covariances in the Frobenius norm sense, over the resampling of the training set.
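The resampling procedure just described is straightforward to code for one class; the sketch below uses 100 bootstrap replicates as in the text, and interprets the divisor n in P = Σ_μ^{-1}/n as the data dimension (the median of a χ²_d variable is roughly d, which gives approximately a 50% confidence ellipsoid).

```python
import numpy as np

def bootstrap_uncertainty(X, n_boot=100, seed=0):
    """X: (n_samples, n_features) training data of one class."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means, covs = [], []
    for _ in range(n_boot):
        Xb = X[rng.integers(0, n, size=n)]            # resample with replacement
        means.append(Xb.mean(axis=0))
        covs.append(np.cov(Xb, rowvar=False))
    mu_bar = np.mean(means, axis=0)                   # nominal mean
    Sigma_bar = np.mean(covs, axis=0)                 # nominal covariance
    Sigma_mu = np.cov(np.array(means), rowvar=False)  # covariance of the bootstrap means
    P = np.linalg.inv(Sigma_mu) / d                   # confidence-ellipsoid shape
    rho = max(np.linalg.norm(C - Sigma_bar) for C in covs)  # Frobenius radius
    return mu_bar, Sigma_bar, P, rho
```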
Figure 1: Test-set accuracy (TSA) for the sonar and ionosphere benchmarks versus the training-set fraction η. The solid line represents the robust Fisher LDA results and the dotted line the nominal Fisher LDA results. The vertical bars represent the standard deviation.
Figure 1 summarizes the classification results. For each of our two problems, and for each
value of η, we show the average test-set accuracy (TSA), as well as the standard deviation
(over the 100 instances of each problem with the given value of η). The plots show that the
robust Fisher LDA performs substantially better than the nominal Fisher LDA for small
training sets, but this performance gap disappears as the training set becomes larger.
5 Robust kernel Fisher discriminant analysis
In this section we show how to "kernelize" the robust Fisher LDA. We will consider only
a specific class of uncertainty models; the arguments we develop here can be extended to
more general cases. In the kernel approach we map the problem to a higher dimensional
space R^f via a mapping φ : R^n → R^f so that the new decision boundary is more general
and possibly nonlinear. Let the data be mapped as
x → φ(x) ∼ (μ_φ(x), Σ_φ(x)),   y → φ(y) ∼ (μ_φ(y), Σ_φ(y)).
??(y) , ?
The uncertainty model we consider has the form
??(x) ? ??(y) = ?
??(x) ? ?
??(y) + P uf , kuf k ? 1,
? ?(x) kF ? ?x , k??(y) ? ?
? ?(y) kF ? ?y .
k??(x) ? ?
(18)
? ?(x) , ?
? ?(y) repHere the vectors ?
??(x) , ?
??(y) represent the nominal means, the matrices ?
resent the nominal covariances, and the (positive semidefinite) matrix P and the constants
?x and ?y represent the confidence regions in the feature space. The worst-case Fisher
discriminant ratio in the feature space is then given by
min
? ?(x) kF ??x ,k??(y) ??
? ?(y) kF ??y
kuf k?1,k??(x) ??
(wfT (?
??(x) ? ?
??(y) + P uf ))2
wfT (??(x) + ??(y) )wf
.
The robust kernel Fisher discriminant analysis problem is to find the discriminant in the
feature space that maximizes this ratio.
Using the technique described in §3, we can see that the robust kernel Fisher discriminant
analysis problem can be cast as
maximize   min_{||u_f||≤1} (w_f^T (μ̄_φ(x) − μ̄_φ(y) + P u_f))² / (w_f^T (Σ̄_φ(x) + Σ̄_φ(y) + (ρ_x + ρ_y) I) w_f)
subject to  w_f ≠ 0,   (19)
where the discriminant w_f ∈ R^f is defined in the new feature space.
To apply the kernel trick to the problem (19), the nonlinear decision boundary should be
entirely expressed in terms of inner products of the mapped data only. The following
proposition tells us a set of conditions to do so.
Proposition 1. Given the sample points {x_i}_{i=1}^{N_x} and {y_i}_{i=1}^{N_y}, suppose that μ̄_φ(x), μ̄_φ(y),
Σ̄_φ(x), Σ̄_φ(y), and P can be written as
μ̄_φ(x) = Σ_{i=1}^{N_x} λ_i φ(x_i),   μ̄_φ(y) = Σ_{i=1}^{N_y} λ_{i+N_x} φ(y_i),   P = U Λ U^T,
Σ̄_φ(x) = Σ_{i=1}^{N_x} Γ_{i,i} (φ(x_i) − μ̄_φ(x)) (φ(x_i) − μ̄_φ(x))^T,
Σ̄_φ(y) = Σ_{i=1}^{N_y} Γ_{i+N_x, i+N_x} (φ(y_i) − μ̄_φ(y)) (φ(y_i) − μ̄_φ(y))^T,
where λ ∈ R^{N_x+N_y}, Γ ∈ S_+^{N_x+N_y}, Λ ∈ S_+^{N_x+N_y} is a diagonal matrix, and U is a matrix
whose columns are the vectors {φ(x_i) − μ̄_φ(x)}_{i=1}^{N_x} and {φ(y_i) − μ̄_φ(y)}_{i=1}^{N_y}. Denote as Φ
the matrix whose columns are the vectors {φ(x_i)}_{i=1}^{N_x}, {φ(y_i)}_{i=1}^{N_y} and define
D_1 = K ν,
D_2 = K (I − λ 1_N^T) Λ (I − λ 1_N^T)^T K,
D_3 = K (I − λ 1_N^T) Γ (I − λ 1_N^T)^T K + (ρ_x + ρ_y) K,
D_4 = K,
where K is the kernel matrix K_{ij} = (Φ^T Φ)_{ij}, 1_N is a vector of ones of length N_x + N_y,
and ν ∈ R^{N_x+N_y} is such that ν_i = λ_i for i = 1, . . . , N_x and ν_i = −λ_i for i = N_x + 1, . . . , N_x + N_y. Let α* be an optimal solution of the problem
maximize   min_{u : u^T D_4 u ≤ 1} (α^T (D_1 + D_2 u) (D_1 + D_2 u)^T α) / (α^T D_3 α)
subject to  α ≠ 0.   (20)
Then, w_f* = Φ α* is an optimal solution of the problem (19). Moreover, for every point
z ∈ R^n,
w_f*^T φ(z) = Σ_{i=1}^{N_x} α_i* K(z, x_i) + Σ_{i=1}^{N_y} α*_{i+N_x} K(z, y_i).   (21)
Along the lines of the proofs of Corollary 5 in [6], we can prove this proposition.
References
[1] D. Bertsekas, A. Nedić, and A. Ozdaglar. Convex Analysis and Optimization. Athena Scientific,
2003.
[2] C. Bhattacharyya. Second order cone programming formulations for feature selection. Journal
of Machine Learning Research, 5:1417?1433, 2004.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] B. Efron and R.J. Tibshirani. An Introduction to Bootstrap. Chapman and Hall, London UK,
1993.
[5] K. Huang, H. Yang, I. King, M. Lyu, and L. Chan. The minimum error minimax probability
machine. Journal of Machine Learning Research, 5:1253?1286, 2004.
[6] G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. Jordan. A robust minimax approach to
classification. Journal of Machine Learning Research, 3:555?582, 2002.
[7] S. Mika, G. Rätsch, and K. Müller. A mathematical programming approach to the kernel Fisher
algorithm, 2001. In Advances in Neural Information Processing Systems, 13, pp. 591-597, MIT
Press.
[8] S. Verdú and H. Poor. On minimax robustness: A general approach and applications. IEEE
Transactions on Information Theory, 30(2):328?340, 1984.
A Bayes Rule for Density Matrices
Manfred K. Warmuth*
Computer Science Department
University of California at Santa Cruz
[email protected]
Abstract
The classical Bayes rule computes the posterior model probability
from the prior probability and the data likelihood. We generalize
this rule to the case when the prior is a density matrix (symmetric
positive definite and trace one) and the data likelihood a covariance
matrix. The classical Bayes rule is retained as the special case when
the matrices are diagonal.
In the classical setting, the calculation of the probability of the
data is an expected likelihood, where the expectation is over the
prior distribution. In the generalized setting, this is replaced by an
expected variance calculation where the variance is computed along
the eigenvectors of the prior density matrix and the expectation is
over the eigenvalues of the density matrix (which form a probability vector). The variance along any direction is determined
by the covariance matrix. Curiously enough this expected variance calculation is a quantum measurement where the co-variance
matrix specifies the instrument and the prior density matrix the
mixture state of the particle. We motivate both the classical and
the generalized Bayes rule with a minimum relative entropy principle, where the Kullback-Leibler version gives the classical Bayes
rule and Umegaki's quantum relative entropy the new Bayes rule
for density matrices.
1 Introduction
In [TRW05] various on-line updates were generalized from vector parameters to
matrix parameters. Following [KW97], the updates were derived by minimizing the
loss plus a divergence to the last parameter. In this paper we use the same method
for deriving a Bayes rule for density matrices (symmetric positive definite matrices
of trace one). When the parameters are probability vectors over the set of models,
then the "classical" Bayes rule can be derived using the relative entropy as the
divergence (e.g., [KW99, SWRL03]). Analogously we now use the quantum relative
entropy, introduced by Umegaki, to derive the generalized Bayes rule.
* Supported by NSF grant CCR 9821087. Some of this work was done while visiting National ICT Australia in Canberra.
Figure 1: We update the prior four times
based on the same data likelihood vector
P (y|Mi ). The initial posteriors are close
to the prior but eventually the posteriors
focus their weight on argmaxi P (y|Mi ).
The classical Bayes rule may be seen as
a soft maximum calculation.
Figure 2: We depict seven iterations of the
generalized Bayes rule with the bold NW-SE ellipse as the prior density and the bold-dashed SE-NW ellipse as the data covariance matrix. The posterior density matrices (dashed)
gradually move from the prior to the longest
axis of the covariance matrix.
The new rule uses matrix logarithms and exponentials to avoid the fact that symmetric positive definite matrices are not closed under the matrix product. The rule
is strikingly similar to the classical Bayes rule and retains the latter as a special
case when the matrices are diagonal. Various cancellations occur when the classical
Bayes rule is applied iteratively and similar cancellations happen with the new rule.
We shall see that the classical Bayes rule may be seen a soft maximum calculation
and the new rule as a soft calculation of the eigenvector with the largest eigenvalue
(See figures 1 and 2).
The mathematics applied in this paper is most commonly used in quantum physics.
For example, the data likelihood becomes a quantum measurement. It is tempting
to call the new rule the "quantum Bayes rule". However, we have no physical
interpretation of this rule. The measurement does not collapse our state and
we don't use the unitary evolution of a state to model the rule. Also, the term
"quantum Bayes rule" has been claimed before in [SBC01] where the classical Bayes
rule is used to update probabilities that happen to arise in the context of quantum
physics. In contrast, in this paper our parameters are density matrices.
Our work is most closely related to a paper by Cerf and Adam [CA99] who also
give a formula for conditional densities that relies on the matrix exponential and
logarithm. However they are interested in the multivariate case (which requires the
use of tensors) and their motivation is to obtain a generalization of a conditional
quantum entropy. We hope to build on the great body of work done with the
classical Bayes rule in the statistics community and therefore believe that this line
of research holds great promise.
2 The Classical Bayes Rule
To establish a common notation we begin by introducing the familiar Bayes rule.
Assume we have n models M1 , . . . , Mn . In the classical setup, model Mi is chosen
with prior probability P (Mi ) and then Mi generates a datum y with probability
P (y|Mi ). After observing y, the posterior probabilities of model Mi are calculated
via Bayes Rule:
P(M_i | y) = P(M_i) P(y | M_i) / Σ_j P(M_j) P(y | M_j).   (1)
Figure 3: An ellipse S in R²: the eigenvectors are the directions of the axes and the eigenvalues their lengths. Ellipses are weighted combinations of the one-dimensional degenerate ellipses (dyads) corresponding to the axes. (For unit u, the dyad u u^T is a degenerate one-dimensional ellipse with its single axis in direction u.) The solid curve of the ellipse is a plot of Su and the outer dashed figure eight is direction u times the variance u^T S u. At the eigenvectors, this variance equals the eigenvalues and touches the ellipse.
Figure 4: When the ellipses S and T don't have the same span, then S ⊙ T lies in the intersection of both spans and is a degenerate ellipse of dimension one (bold line). This generalizes the following intersection property of the matrix product when S and T are both diagonal (here of dimension four): an entry of diag(ST) is nonzero only where both diag(S) and diag(T) are nonzero; in the figure's example the overlapping entries a and b produce the single nonzero product entry ab.
See Figure 1 for a bar plot of the effect of the update on the posterior. By the
Theorem of Total Probability, the expected likelihood in the denominator equals
P (y). In a moment we will replace this expected likelihood by an expected variance.
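The soft-maximum behaviour of Figure 1 is easy to reproduce: applying (1) repeatedly with the same likelihood vector raises the likelihoods to a power and renormalizes. A small sketch with made-up prior and likelihood values:

```python
import numpy as np

prior = np.array([0.3, 0.4, 0.2, 0.1])
lik = np.array([0.5, 0.2, 0.9, 0.4])     # P(y | M_i) for a fixed outcome y

post = prior.copy()
for t in range(4):
    post = post * lik
    post /= post.sum()                   # Bayes rule (1)
    print(t + 1, np.round(post, 3))
# after t updates, post is proportional to prior * lik**t:
# the weight piles onto argmax_i P(y | M_i)
```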
3 Density Matrices as Priors
We now let our prior D be an arbitrary symmetric positive¹ definite matrix of
trace one. Such matrices are called density matrices in quantum physics. An outer
product u u^T, where u has unit length, is called a dyad. Any mixture Σ_i α_i a_i a_i^T of
dyads a_i a_i^T is a density matrix as long as the coefficients α_i are non-negative and
sum to one. This is true even if the number of dyads is larger or smaller than the
dimension of D. The trace of such a mixture is one because dyads have trace one
and Σ_i α_i = 1. Of course any density matrix D can be decomposed based on an
eigensystem. That is, D = V Λ V^T where V V^T = I. Now the vector of eigenvalues
(λ_i) forms a probability vector whose dimension equals that of the density matrix.
In quantum physics, the dyads are called pure states and density matrices are mixtures over such states. Note that in this paper we want to address the statistics
community and use linear algebra notation instead of Dirac notation. The probability
vector (P(M_i)) can be represented as a diagonal matrix diag((P(M_i))) = Σ_i P(M_i) e_i e_i^T, where e_i denotes the i-th standard basis vector. This means that
¹ We use the convention that positive definite matrices have non-negative eigenvalues and strictly positive definite matrices have positive eigenvalues.
probability vectors are special density matrices where the eigenvectors are fixed to
the standard basis vectors.
4 Co-variance Matrices and Basic Notation
In this paper we replace the (conditional) data likelihoods P (y|Mi ) by a data covariance matrix D(y|.) (symmetric positive definite matrix). We now discuss such
matrices in more detail.
A covariance matrix S can be depicted as an ellipse {Su : ||u||₂ ≤ 1} centered
at the origin, where the eigenvectors form the principal axes and the eigenvalues
are the lengths of the axes (See Figure 3). Assume S is the covariance matrix
of some random cost vector c ∈ R^n, i.e. S = E((c − E(c))(c − E(c))^T). Note
that a covariance matrix S is diagonal if the components of the cost vector are
independent. The variance of the cost vector c along a unit vector u has the form
V(c^T u) = E((c^T u − E(c^T u))²) = E(((c^T − E(c^T)) u)²) = u^T S u
and the variance along an eigenvector is the corresponding eigenvalue (See Figure
3). Using this interpretation, the matrix S may be seen as a mapping S(.) from
the unit ball to R_{≥0}, i.e. S(u) = u^T S u.
A second interpretation of the scalar u^T S u is the square length of u w.r.t. the
basis √S, that is u^T S u = u^T √S √S u = ||√S u||₂². Thirdly, u^T S u is a quantum
measurement of the pure state u with an instrument represented by S. Since the
square length of u w.r.t. any orthonormal basis is one, the eigenbasis (s_i) of S turns the
unit vector into an n-dimensional probability vector ((u^T s_i)²). Now u^T S u is the
expected eigenvalue w.r.t. this probability vector: u^T S u = Σ_i λ_i (u^T s_i)².
The trace tr(A) of a square matrix A is the sum of its diagonal elements Aii . Recall
that tr(AB) = tr(BA) for any matrices A ∈ R^{n×m}, B ∈ R^{m×n}. The trace is
unitarily invariant, i.e. for any orthogonal matrix U, tr(U A U^T) = tr(U^T U A) =
tr(A). Also, tr(u u^T A) = tr(u^T A u) = u^T A u. Therefore the trace of a square
matrix may be seen as the total variance along any set of orthogonal directions:
X
X
u>
ui u>
tr(A) = tr(IA) = tr(
i Aui .
i A) =
i
i
In particular, the trace of a square matrix is the sum of its eigenvalues.
The matrix exponential $\exp(S)$ of the symmetric matrix $S = \mathcal{S}\Sigma\mathcal{S}^\top$ is defined as $\mathcal{S}\exp(\Sigma)\mathcal{S}^\top$, where $\exp(\Sigma)$ is obtained by exponentiating the diagonal entries (eigenvalues). The matrix logarithm $\log(S)$ is defined similarly, but now $S$ must be strictly positive definite. Clearly, the two functions are inverses of each other. It is important to remember that $\exp(S + T) = \exp(S)\exp(T)$ only holds iff the two symmetric matrices commute², i.e. $ST = TS$. However, the following trace inequality, known as the Golden-Thompson inequality [Bha97], always holds:
$$\mathrm{tr}(\exp S\, \exp T) \ge \mathrm{tr}(\exp(S + T)). \qquad (2)$$

5 The Generalized Bayes Rule
The following experiment underlies the more general setup: if the prior is $D(.) = \sum_i \lambda_i d_i d_i^\top$, then the dyad (or pure state) $d_i d_i^\top$ is chosen with probability $\lambda_i$ and a random variable $c^\top d_i$ is observed, where $c$ has covariance matrix $D(y|.)$.

² This occurs iff the two symmetric matrices have the same eigensystem.
In our generalization we replace the expected data likelihood $P(y) = \sum_i P(M_i)\, P(y|M_i)$ by the following trace:
$$\mathrm{tr}(D(.)\,D(y|.)) = \mathrm{tr}\Big(\sum_i \lambda_i\, d_i d_i^\top D(y|.)\Big) = \sum_i \lambda_i\, d_i^\top D(y|.)\, d_i.$$
Recall that $d_i^\top D(y|.)\, d_i$ is the variance of $c$ in direction $d_i$, i.e. $V(c^\top d_i)$. Therefore the above trace is the expected variance along the eigenvectors of the density matrix weighted by the eigenvalues. Curiously enough, this trace computation is a quantum measurement, where $D(y|.)$ represents the instrument and $D(.)$ the mixture state of the particle.
In the generalized Bayes rule we cannot simply multiply the prior density matrix with the covariance matrix that corresponds to the data likelihood. This is because a product of two symmetric positive definite matrices may be neither symmetric nor positive definite. Instead we define the operation $\odot$ on the cone of symmetric positive definite matrices. We begin by defining this operation for the case when the matrices $S$ and $T$ are strictly positive definite (and symmetric):
$$S \odot T := \exp(\log S + \log T). \qquad (3)$$
The matrix log of both matrices produces symmetric matrices that sum to a symmetric matrix. Finally, the matrix exponential of the sum produces again a symmetric positive definite matrix. Note that the matrix log is not defined when the matrix has a zero eigenvalue. However, for arbitrary symmetric positive definite matrices one can define the operation as the following limit:
$$S \odot T := \lim_{n \to \infty} \big(S^{1/n}\, T^{1/n}\big)^n.$$
This limit is the Lie Product Formula [Bha97] when $S$ and $T$ are both strictly positive definite, but it exists even if the matrices don't have full rank, and by Theorem 1.2 of [Sim79],
$$\mathrm{range}(S \odot T) = \mathrm{range}(S) \cap \mathrm{range}(T).$$
Assume that $k$ is the dimension of $\mathrm{range}(S) \cap \mathrm{range}(T)$, that $B$ is an orthonormal basis of $\mathrm{range}(S) \cap \mathrm{range}(T)$ (i.e. $B \in \mathbb{R}^{n \times k}$, $B^\top B = I_k$, and $\mathrm{range}(B) = \mathrm{range}(S) \cap \mathrm{range}(T)$), and that $\log^+$ denotes the modified matrix logarithm that takes logs of the non-zero eigenvalues but leaves zero eigenvalues unchanged. Then by the same theorem³,
$$S \odot T = B \exp\big(B^\top (\log^+ S + \log^+ T)\, B\big)\, B^\top. \qquad (4)$$
When both matrices have the same eigensystem, then $\odot$ becomes the matrix product. One can show that $\odot$ is associative, commutative, has the identity matrix $I$ as its neutral element, and for any strictly positive definite and symmetric matrix $S$, $S \odot S^{-1} = I$. Finally, $(cS) \odot T = c\,(S \odot T)$ for any non-negative scalar $c$.
Using this new product operation, the generalized Bayes rule becomes:
$$D(.|y) = \frac{D(.) \odot D(y|.)}{\mathrm{tr}(D(.) \odot D(y|.))}. \qquad (5)$$
Normalizing by the trace assures that the trace of the posterior density matrix is
one. As we see in Figure 2, this posterior moves toward the largest axis of the data covariance matrix, and the new rule can be interpreted as a soft calculation of the eigenvector with maximum eigenvalue.
³ The $\log^+ S$ term in the formula can be replaced by $\tilde B \log(\tilde B^\top S \tilde B)\, \tilde B^\top$, where $\tilde B$ is an orthonormal basis of $\mathrm{range}(S)$, and similarly for $\log^+ T$.
Figure 5: Assume the prior density matrix is the circle $D(.) = \left(\begin{smallmatrix} 1/2 & 0 \\ 0 & 1/2 \end{smallmatrix}\right)$ and the data covariance matrix the degenerate NE-SW ellipse $D(y|.) = \frac{1}{2}\left(\begin{smallmatrix} 1 & -1 \\ -1 & 1 \end{smallmatrix}\right) = U \left(\begin{smallmatrix} 0 & 0 \\ 0 & 1 \end{smallmatrix}\right) U^\top$, where $U = \frac{1}{\sqrt{2}}\left(\begin{smallmatrix} 1 & 1 \\ 1 & -1 \end{smallmatrix}\right)$. Now for all diagonal matrices $S(.)$, $\mathrm{tr}(S(.)\,D(y|.)) = \frac{1}{2}$, i.e. the largest eigenvalue is not "visible" in basis $I$. But $\mathrm{tr}\big(\underbrace{U \left(\begin{smallmatrix} 0 & 0 \\ 0 & 1 \end{smallmatrix}\right) U^\top}_{D(.|y)\ \text{of the new rule}}\; D(y|.)\big) = 1$.
When the matrices $D(.)$ and $D(y|.)$ have the same eigensystem, then $\odot$ becomes the matrix multiplication. In particular, when the prior is $\mathrm{diag}((P(M_i)))$ and the covariance matrix is $\mathrm{diag}((P(y|M_i)))$, then the new rule realizes the classical rule and computes $\mathrm{diag}((P(M_i|y)))$. Figure 5 gives an example that shows how the off-diagonal elements can be exploited by the new rule.
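To make the $\odot$-operation concrete, here is a minimal NumPy/SciPy sketch of eq. (3) and the generalized Bayes rule (5), checked against the Figure 5 example. This is our own illustration, not the authors' code; the names odot and generalized_bayes are ours, and the rank-one covariance is slightly regularized because eq. (3) assumes strictly positive definite inputs:

import numpy as np
from scipy.linalg import expm, logm

def odot(S, T):
    # S odot T = exp(log S + log T), eq. (3); strictly positive definite S, T
    return expm(logm(S) + logm(T)).real

def generalized_bayes(prior, data_cov):
    # Generalized Bayes rule, eq. (5): normalize the odot-product to trace one
    P = odot(prior, data_cov)
    return P / np.trace(P)

# Figure 5 example; regularize the rank-one covariance so the matrix log exists.
D_prior = 0.5 * np.eye(2)                          # the "circle" prior
D_data = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])  # degenerate NE-SW ellipse
posterior = generalized_bayes(D_prior, D_data + 1e-8 * np.eye(2))
print(np.round(posterior, 4))                 # ~[[0.5, -0.5], [-0.5, 0.5]]
print(round(float(np.trace(posterior @ D_data)), 4))  # ~1.0, the hidden eigenvalue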
In the classical Bayes rule, the normalization factor is the expected data likelihood. In the case of the generalized Bayes rule, the expected variance only upper bounds the normalization factor via the Golden-Thompson inequality (2):
$$\mathrm{tr}(D(.)\,D(y|.)) \ge \mathrm{tr}(D(.) \odot D(y|.)). \qquad (6)$$
The classical Bayes rule can be applied iteratively to a sequence of data, and various cancellations occur. For the sake of simplicity we only consider two data points $y_1, y_2$:
$$P(M_i|y_2 y_1) = \frac{P(M_i|y_1)\,P(y_2|M_i, y_1)}{P(y_2|y_1)} = \frac{P(M_i)\,P(y_1|M_i)\,P(y_2|M_i, y_1)}{P(y_2 y_1)}.$$
$$P(y_2|y_1)\,P(y_1) = \Big(\sum_i \underbrace{P(M_i|y_1)}_{\text{use (1)}}\, P(y_2|M_i, y_1)\Big)\Big(\sum_i P(M_i)\,P(y_1|M_i)\Big) = \sum_i P(M_i)\,P(y_1|M_i)\,P(y_2|M_i, y_1) = P(y_2 y_1).$$
Analogously,
$$D(.|y_2 y_1) = \frac{D(.|y_1) \odot D(y_2|., y_1)}{\mathrm{tr}(D(.|y_1) \odot D(y_2|., y_1))} = \frac{D(.) \odot D(y_1|.) \odot D(y_2|., y_1)}{\mathrm{tr}(D(.) \odot D(y_1|.) \odot D(y_2|., y_1))}. \qquad (7)$$
Finally, the products of the expected variances for both trials combine in a similar way, except that in the generalized case the equality becomes an inequality:
$$\mathrm{tr}(D(.|y_1)\,D(y_2|., y_1))\;\mathrm{tr}(D(.)\,D(y_1|.)) \;\ge\; \underbrace{\mathrm{tr}(D(.|y_1) \odot D(y_2|., y_1))\;\mathrm{tr}(D(.) \odot D(y_1|.))}_{\text{use (5)}} \;=\; \mathrm{tr}(D(.) \odot D(y_1|.) \odot D(y_2|., y_1)).$$
The above inequality is an instantiation of the Golden-Thompson inequality (2) and the above equality generalizes the middle equality in (7).
6 The Derivation of the Generalized Bayes Rule
The classical Bayes rule can be derived⁴ by minimizing a relative entropy to the prior plus a convex combination of the log losses of the models (see e.g. [KW99, SWRL03]):
$$\inf_{\lambda_i \ge 0,\; \sum_i \lambda_i = 1}\; \sum_i \lambda_i \ln \frac{\lambda_i}{P(M_i)} \;-\; \sum_i \lambda_i \log P(y|M_i).$$
Without the relative entropy, the argument of the infimum is linear in the weights $\lambda_i$ and is minimized when all weight is placed on the maximum likelihood models, i.e. the set of indices $\arg\max_i P(y|M_i)$. The negative entropy ameliorates the maximum calculation and pulls the optimal solution towards the prior. Observe that the non-negativity constraints can be dropped since the entropy acts as a barrier. By introducing a Lagrange multiplier for the remaining constraint and differentiating, we obtain the solution $\lambda_i^* = \frac{P(M_i)\,P(y|M_i)}{\sum_j P(M_j)\,P(y|M_j)}$, which is the classical Bayes rule (1). By plugging $\lambda_i^*$ into the argument of the infimum we obtain the optimum value $-\ln P(y)$. Notice that this is minus the logarithm of the normalization of the Bayes rule (1) and is also the log loss associated with the standard Bayesian setup.
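A small numeric sanity check of this derivation (our own sketch, assuming NumPy): the Bayes posterior never scores worse than random probability vectors under the entropy-plus-log-loss objective, and its objective value equals $-\ln P(y)$:

import numpy as np

prior = np.array([0.5, 0.3, 0.2])
lik = np.array([0.1, 0.6, 0.3])               # P(y|M_i)

def objective(lam):
    # sum_i lam_i ln(lam_i / prior_i) - sum_i lam_i ln P(y|M_i)
    return np.sum(lam * np.log(lam / prior)) - np.sum(lam * np.log(lik))

posterior = prior * lik / np.sum(prior * lik)  # claimed minimizer (Bayes rule)

rng = np.random.default_rng(0)
for _ in range(1000):                          # random points on the simplex
    lam = rng.dirichlet(np.ones(3))
    assert objective(lam) >= objective(posterior) - 1e-9

print(np.isclose(objective(posterior), -np.log(np.sum(prior * lik))))  # True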
To derive the new generalized Bayes rule in an analogous way, we use the quantum physics generalization of the relative entropy between two densities $G$ and $D$ (due to Umegaki): $\mathrm{tr}(G(\log G - \log D))$. We also need to replace the mixture of negative log likelihoods by the trace $-\mathrm{tr}(G \log D(y|.))$. Now the matrix parameter $G$ is constrained to be a density matrix and the minimization problem becomes⁵:
$$\inf_{G\ \text{dens.\ matr.}}\; \mathrm{tr}\big(G(\log G - \log D(.))\big) - \mathrm{tr}\big(G \log D(y|.)\big).$$
Except for the quantum relative entropy term, the argument of the infimum is again linear in the variable $G$ and is minimized when $G$ is a single dyad $uu^\top$, where $u$ is the eigenvector belonging to the maximum eigenvalue of the matrix $\log D(y|.)$. The linear term pulls $G$ toward a direction of high variance of this matrix, whereas the quantum relative entropy pulls $G$ toward the prior density matrix. The density matrix constraint requires the eigenvalues of $G$ to be non-negative and the trace of $G$ to be one. The entropy works as a barrier for the non-negativity constraints and thus these constraints can be dropped. Again, by introducing a Lagrange multiplier for the remaining trace constraint and differentiating (following [TRW05]), we arrive at a formula for the optimum $G^*$ which coincides with the formula for $D(.|y)$ given in the generalized Bayes rule (5), where $\odot$ is defined⁶ as in (3). Since the quantum relative entropy is strictly convex [NC00] in $G$, the optimum $G^*$ is unique.
⁴ For the sake of simplicity assume that for all $i$, $P(M_i)$ and $P(y|M_i)$ are non-negative.
⁵ Assume here that $D(.)$ and $D(y|.)$ are both strictly positive definite.
⁶ With some work, one can also derive the Bayes rule with the fancier operation (4).
7 Conclusion
Our generalized Bayes rule suggests a definition of conditional density matrices and
we are currently developing a calculus for such matrices. In particular, a common
formalism is needed that includes the multivariate conditional density matrices defined in [CA99] based on tensors.
In this paper we only considered real symmetric matrices. However, our methods immediately generalize to complex Hermitian matrices, i.e. square matrices in $\mathbb{C}^{n \times n}$ for which $S = \bar S^\top = S^*$. Now both the prior density matrix and the data covariance matrix must be Hermitian instead of symmetric.
The generalized Bayes rule for symmetric positive definite matrices relies on computing eigendecompositions ($\Theta(n^3)$ time). Hopefully, there exist $O(n^2)$ versions of the update that approximate the generalized Bayes rule sufficiently well.
Extensive research has been done in the so-called "expert framework" (see e.g. [KW99] for a list of references), where a mixture over experts is maintained
by the on-line algorithm for the purpose of performing as well as the best expert
chosen in hindsight. In preliminary research we showed that one can maintain a
density matrix over the base experts instead and derive updates similar to the generalized Bayes rule given in this paper. Most importantly, the bounds generalize to
the case when mixtures over experts are replaced by density matrices.
Acknowledgment: We would like to thank Dima Kuzmin for his extensive help
with all aspects of this paper. Thanks also to Torsten Ehrhardt who first proved to
us the range intersection and projection properties of the operation.
References

[Bha97] R. Bhatia. Matrix Analysis. Springer, Berlin, 1997.

[CA99] N. J. Cerf and C. Adami. Quantum extension of conditional probability. Physical Review A, 60(2):893-897, August 1999.

[KW97] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1-64, January 1997.

[KW99] J. Kivinen and M. K. Warmuth. Averaging expert predictions. In Computational Learning Theory: 4th European Conference (EuroCOLT '99), pages 153-167, Berlin, March 1999. Springer.

[NC00] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.

[SBC01] R. Schack, T. A. Brun, and C. M. Caves. Quantum Bayes rule. Physical Review A, 64(014305), 2001.

[Sim79] Barry Simon. Functional Integration and Quantum Physics. Academic Press, New York, 1979.

[SWRL03] R. Singh, M. K. Warmuth, B. Raj, and P. Lamere. Classification with free energy at raised temperatures. In Proc. of EUROSPEECH 2003, pages 1773-1776, September 2003.

[TRW05] K. Tsuda, G. Rätsch, and M. K. Warmuth. Matrix exponentiated gradient updates for on-line learning and Bregman projections. Journal of Machine Learning Research, 6:995-1018, June 2005.
Structured Prediction via the Extragradient Method
Ben Taskar
Computer Science
UC Berkeley, Berkeley, CA 94720
[email protected]
Simon Lacoste-Julien
Computer Science
UC Berkeley, Berkeley, CA 94720
[email protected]
Michael I. Jordan
Computer Science and Statistics
UC Berkeley, Berkeley, CA 94720
[email protected]
Abstract
We present a simple and scalable algorithm for large-margin estimation of structured models, including an important class of Markov networks and combinatorial models. We formulate the estimation problem
as a convex-concave saddle-point problem and apply the extragradient
method, yielding an algorithm with linear convergence using simple gradient and projection calculations. The projection step can be solved using combinatorial algorithms for min-cost quadratic flow. This makes the
approach an efficient alternative to formulations based on reductions to
a quadratic program (QP). We present experiments on two very different
structured prediction tasks: 3D image segmentation and word alignment,
illustrating the favorable scaling properties of our algorithm.
1 Introduction
The scope of discriminative learning methods has been expanding to encompass prediction
tasks with increasingly complex structure. Much of this recent development builds upon
graphical models to capture sequential, spatial, recursive or relational structure, but as we
will discuss in this paper, the structured prediction problem is broader still. For graphical
models, two major approaches to discriminative estimation have been explored: (1) maximum conditional likelihood [13] and (2) maximum margin [6, 1, 20]. For the broader class
of models that we consider here, the conditional likelihood approach is intractable, but the
large margin formulation yields tractable convex problems.
We interpret the term structured output model very broadly, as a compact scoring scheme
over a (possibly very large) set of combinatorial structures and a method for finding the
highest scoring structure. In graphical models, the scoring scheme is embodied in a probability distribution over possible assignments of the prediction variables as a function of
input variables. In models based on combinatorial problems, the scoring scheme is usually a simple sum of weights associated with vertices, edges, or other components of a
structure; these weights are often represented as parametric functions of a set of features.
Given training instances labeled by desired structured outputs (e.g., matchings) and a set of
features that parameterize the scoring function, the learning problem is to find parameters
such that the highest scoring outputs are as close as possible to the desired outputs.
Examples of prediction tasks solved via combinatorial optimization problems include bipartite and non-bipartite matching in alignment of 2D shapes [5], word alignment in natural
language translation [14] and disulfide connectivity prediction for proteins [3]. All of these
problems can be formulated in terms of a tractable optimization problem. There are also
interesting subfamilies of graphical models for which large-margin methods are tractable
whereas likelihood-based methods are not; an example is the class of Markov random fields
with restricted potentials used for object segmentation in vision [12, 2].
Tractability is not necessarily sufficient to obtain algorithms that work effectively in practice. In particular, although the problem of large margin estimation can be formulated as a
quadratic program (QP) in several cases of interest [2, 19], and although this formulation
exploits enough of the problem structure so as to achieve a polynomial representation in
terms of the number of variables and constraints, off-the-shelf QP solvers scale poorly with
problem and training sample size for these models. To solve large-scale machine learning
problems, researchers often turn to simple gradient-based algorithms, in which each individual step is cheap in terms of computation and memory. Examples of this approach in the
structured prediction setting include the Structured Sequential Minimal Optimization algorithm [20, 18] and the Structured Exponentiated Gradient algorithm [4]. These algorithms
are first-order methods for solving QPs arising from low-treewidth Markov random fields
and other decomposable models. They are able to scale to significantly larger problems
than off-the-shelf QP solvers. However, they are limited in scope in that they rely on dynamic programming to compute essential quantities such as gradients. They do not extend
to models in which dynamic programming is not applicable, for example, to problems such
as matchings and min-cuts.
In this paper, we present an estimation methodology for structured prediction problems
that does not require a general-purpose QP solver. We propose a saddle-point formulation
which allows us to exploit simple gradient-based methods [11] with linear convergence
guarantees. Moreover, we show that the key computational step in these methods, a certain projection operation, inherits the favorable computational complexity of the underlying optimization problem. This important result makes our approach viable computationally. In particular, for matchings and min-cuts, projection involves a min-cost quadratic
flow computation, a problem for which efficient, highly-specialized algorithms are available. We illustrate the effectiveness of this approach on two very different large-scale
structured prediction tasks: 3D image segmentation and word alignment in translation.
2 Structured models
We begin by discussing two special cases of the general framework that we subsequently
present: (1) a class of Markov networks used for segmentation, and (2) a bipartite matching
model for word alignment. Despite significant differences in the setup for these models,
they share the property that in both cases the problem of finding the highest-scoring output
can be formulated as a linear program (LP).
Markov networks. We consider a special class of Markov networks, common in vision
applications, in which inference reduces to a tractable min-cut problem [7]. Focusing on
binary variables, $y = \{y_1, \ldots, y_N\}$, and pairwise potentials, we define a joint distribution over $\{0,1\}^N$ via $P(y) \propto \prod_{j\in V} \phi_j(y_j) \prod_{jk\in E} \phi_{jk}(y_j, y_k)$, where $(V, E)$ is an undirected graph, and where $\{\phi_j(y_j) : j \in V\}$ are the node potentials and $\{\phi_{jk}(y_j, y_k) : jk \in E\}$ are the edge potentials.
In image segmentation (see Fig. 1(a)), the node potentials capture local evidence about
the label of a pixel or laser scan point. Edges usually connect nearby pixels in an image,
and serve to correlate their labels. Assuming that such correlations tend to be positive
[Figure 1: Examples of structured prediction applications: (a) articulated object segmentation and (b) word alignment in machine translation. The alignment example pairs "What is the anticipated cost of collecting fees under the new proposal ?" with "En vertu de les nouvelles propositions , quel est le coût prévu de perception de les droits ?"]
(connected nodes tend to have the same label), we restrict the form of edge potentials to be
of the form $\phi_{jk}(y_j, y_k) = \exp\{-s_{jk}\, 1\!\mathrm{I}(y_j \ne y_k)\}$, where $s_{jk}$ is a non-negative penalty for assigning $y_j$ and $y_k$ different labels. Expressing node potentials as $\phi_j(y_j) = \exp\{s_j y_j\}$, we have $P(y) \propto \exp\big\{\sum_{j\in V} s_j y_j - \sum_{jk\in E} s_{jk}\, 1\!\mathrm{I}(y_j \ne y_k)\big\}$. Under this restriction of the potentials, it is known that the problem of computing the maximizing assignment, $y^* = \arg\max P(y \,|\, x)$, has a tractable formulation as a min-cut problem [7]. In particular, we obtain the following LP:
$$\max_{0 \le z \le 1}\; \sum_{j\in V} s_j z_j - \sum_{jk\in E} s_{jk} z_{jk} \quad \text{s.t.}\quad z_j - z_k \le z_{jk},\; z_k - z_j \le z_{jk},\; \forall jk \in E. \qquad (1)$$
In this LP, a continuous variable $z_j$ is a relaxation of the binary variable $y_j$. Note that the constraints are equivalent to $|z_j - z_k| \le z_{jk}$. Because $s_{jk}$ is positive, $z_{jk} = |z_k - z_j|$ at the maximum, which is equivalent to $1\!\mathrm{I}(z_j \ne z_k)$ if the $z_j, z_k$ variables are binary. An integral optimal solution always exists, as the constraint matrix is totally unimodular [17] (that is, the relaxation is not an approximation).
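As a concrete illustration (ours, not the authors' code), LP (1) for a tiny 3-node chain can be solved with SciPy's linprog; by total unimodularity the solution comes out integral:

import numpy as np
from scipy.optimize import linprog

s_node = np.array([2.0, -1.0, 1.5])      # node scores s_j
edges = [(0, 1), (1, 2)]
s_edge = np.array([0.5, 3.0])            # non-negative cut penalties s_jk

# Variables: [z_0, z_1, z_2, z_01, z_12]; linprog minimizes, so negate.
c = np.concatenate([-s_node, s_edge])
A_ub, b_ub = [], []
for e, (j, k) in enumerate(edges):
    r1 = np.zeros(5); r1[j], r1[k], r1[3 + e] = 1.0, -1.0, -1.0   # z_j - z_k <= z_jk
    r2 = np.zeros(5); r2[j], r2[k], r2[3 + e] = -1.0, 1.0, -1.0   # z_k - z_j <= z_jk
    A_ub += [r1, r2]; b_ub += [0.0, 0.0]
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, 1)] * 5)
print(res.x)   # integral optimum: [1. 1. 1. 0. 0.], i.e. keep all nodes labeled 1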
We can parametrize the node and edge weights sj and sjk in terms of user-provided features
xj and xjk associated with the nodes and edges. In particular, in 3D range data, xj might be
spin image features or spatial occupancy histograms of a point j, while xjk might include
the distance between points $j$ and $k$, the dot-product of their normals, etc. The simplest model of dependence is a linear combination of features: $s_j = w_n^\top f_n(x_j)$ and $s_{jk} = w_e^\top f_e(x_{jk})$, where $w_n$ and $w_e$ are node and edge parameters, and $f_n$ and $f_e$ are node and edge feature mappings, of dimension $d_n$ and $d_e$, respectively. To ensure non-negativity of $s_{jk}$, we assume the edge features $f_e$ to be non-negative and restrict $w_e \ge 0$. This constraint is easily incorporated into the formulation we present below. We assume that the feature mappings $f$ are provided by the user and our goal is to estimate parameters $w$ from labeled data. We abbreviate the score assigned to a labeling $y$ for an input $x$ as $w^\top f(x, y) = \sum_j y_j\, w_n^\top f_n(x_j) - \sum_{jk\in E} y_{jk}\, w_e^\top f_e(x_{jk})$, where $y_{jk} = 1\!\mathrm{I}(y_j \ne y_k)$.
Matchings. Consider modeling the task of word alignment of parallel bilingual sentences (see Fig. 1(b)) as a maximum weight bipartite matching problem, where the nodes $V = V^s \cup V^t$ correspond to the words in the "source" sentence ($V^s$) and the "target" sentence ($V^t$), and the edges $E = \{jk : j \in V^s, k \in V^t\}$ correspond to possible alignments
between them. For simplicity, assume that each word aligns to one or zero words in the
other sentence. The edge weight sjk represents the degree to which word j in one sentence
can translate into the word k in the other sentence. Our objective is to find an alignment that
maximizes the sum of edge scores. We represent a matching using a set of binary variables
yjk that are set to 1 if word j is assigned to word k in the other sentence,
and 0 otherwise.
The score of an assignment is the sum of edge scores: $s(y) = \sum_{jk\in E} s_{jk} y_{jk}$. The maximum weight bipartite matching problem, $\arg\max_{y\in\mathcal{Y}} s(y)$, can be found by solving the following LP:
$$\max_{0 \le z \le 1}\; \sum_{jk\in E} s_{jk} z_{jk} \quad \text{s.t.}\quad \sum_{j\in V^s} z_{jk} \le 1,\ \forall k \in V^t;\quad \sum_{k\in V^t} z_{jk} \le 1,\ \forall j \in V^s. \qquad (2)$$
where again the continuous variables zjk correspond to the relaxation of the binary variables yjk . As in the min-cut problem, this LP is guaranteed to have integral solutions for
any scoring function s(y) [17].
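For small dense score matrices, the integral optimum of LP (2) can also be recovered with SciPy's Hungarian-algorithm routine; the zero-padding below is our own device to let words remain unaligned, since (2) uses inequality constraints:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Edge scores s_jk for a 3x3 source/target toy example.
S = np.array([[ 2.0, 0.1, -1.0],
              [ 0.3, 1.5,  0.2],
              [-0.5, 0.4,  1.0]])

# linear_sum_assignment finds a full min-cost matching; padding with
# zero-cost "null" columns lets each source word align to nothing.
padded = np.hstack([-S, np.zeros((3, 3))])   # negate: the routine minimizes
rows, cols = linear_sum_assignment(padded)
matching = [(j, k) for j, k in zip(rows, cols) if k < 3]
print(matching)   # [(0, 0), (1, 1), (2, 2)]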
For word alignment, the scores sjk can be defined in terms of the word pair jk and input
features associated with xjk . We can include the identity of the two words, relative position
in the respective sentences, part-of-speech tags, string similarity (for detecting cognates),
etc. We let $s_{jk} = w^\top f(x_{jk})$ for some user-provided feature mapping $f$ and abbreviate $w^\top f(x, y) = \sum_{jk} y_{jk}\, w^\top f(x_{jk})$.
General structure. More generally, we consider prediction problems in which the input $x \in \mathcal{X}$ is an arbitrary structured object and the output is a vector of values $y = (y_1, \ldots, y_{L_x})$, for example, a matching or a cut in the graph. We assume that the length $L_x$ and the structure of $y$ depend deterministically on the input $x$. In our word alignment example, the output space is defined by the lengths of the two sentences. Denote the output space for a given input $x$ as $\mathcal{Y}(x)$ and the entire output space as $\mathcal{Y} = \bigcup_{x\in\mathcal{X}} \mathcal{Y}(x)$.

Consider the class of structured prediction models $\mathcal{H}$ defined by the linear family: $h_w(x) = \arg\max_{y\in\mathcal{Y}(x)} w^\top f(x, y)$, where $f(x, y)$ is a vector of functions $f : \mathcal{X} \times \mathcal{Y} \mapsto \mathbb{R}^n$. This
formulation is very general. Indeed, it is too general for our purposes: for many $f$, $\mathcal{Y}$ pairs,
finding the optimal y is intractable. Below, we specialize to the class of models in which
the arg max problem can be solved in polynomial time using linear programming (and
more generally, convex optimization); this is still a very large class of models.
3 Max-margin estimation
We assume a set of training instances $S = \{(x_i, y_i)\}_{i=1}^m$, where each instance consists of a structured object $x_i$ (such as a graph) and a target solution $y_i$ (such as a matching). Consider learning the parameters $w$ in the conditional likelihood setting. We can define $P_w(y \mid x) = \frac{1}{Z_w(x)} \exp\{w^\top f(x, y)\}$, where $Z_w(x) = \sum_{y'\in\mathcal{Y}(x)} \exp\{w^\top f(x, y')\}$, and maximize the conditional log-likelihood $\sum_i \log P_w(y_i \mid x_i)$, perhaps with additional regularization of the parameters $w$. However, computing the partition function $Z_w(x)$
is #P-complete [23, 10] for the two structured prediction problems we presented above,
matchings and min-cuts. Instead, we adopt the max-margin formulation of [20], which
directly seeks to find parameters $w$ such that $y_i = \arg\max_{y_i'\in\mathcal{Y}_i} w^\top f_i(y_i')$ for all $i$, where $\mathcal{Y}_i = \mathcal{Y}(x_i)$ and $y_i$ denotes the appropriate vector of variables for example $i$. The solution space $\mathcal{Y}_i$ depends on the structured object $x_i$; for example, the space of possible matchings depends on the precise set of nodes and edges in the graph.
As in univariate prediction, we measure the error of prediction using a loss function $\ell(y_i, y_i')$. To obtain a convex formulation, we upper bound the loss $\ell(y_i, h_w(x_i))$ using the hinge function: $\max_{y_i'\in\mathcal{Y}_i} [w^\top f_i(y_i') + \ell_i(y_i')] - w^\top f_i(y_i)$, where $\ell_i(y_i') = \ell(y_i, y_i')$ and $f_i(y_i') = f(x_i, y_i')$. Minimizing this upper bound will force the true structure $y_i$ to be optimal with respect to $w$ for each instance $i$. We add a standard $L_2$ weight penalty $\frac{\|w\|^2}{2C}$:
$$\min_{w\in\mathcal{W}}\; \frac{\|w\|^2}{2C} + \sum_i \max_{y_i'\in\mathcal{Y}_i} \big[w^\top f_i(y_i') + \ell_i(y_i')\big] - w^\top f_i(y_i), \qquad (3)$$
where $C$ is a regularization parameter and $\mathcal{W}$ is the space of allowed weights (for example, $\mathcal{W} = \mathbb{R}^n$ or $\mathcal{W} = \mathbb{R}^n_+$). Note that this formulation is equivalent to the standard formulation using slack variables $\xi$ and slack penalty $C$ presented in [20, 19].
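A generic sketch of objective (3) follows (our own naming; the feature map and the loss-augmented argmax are left as user-supplied callables, matching the oracle discussed in the next paragraph):

import numpy as np

def structured_hinge_objective(w, examples, feat, loss_aug_argmax, C):
    # Objective (3).  feat(x, y) returns the feature vector f(x, y);
    # loss_aug_argmax(w, x, y) returns the loss-augmented maximizer y'
    # together with its loss l(y, y').
    obj = w @ w / (2.0 * C)
    for x, y in examples:
        y_hat, loss = loss_aug_argmax(w, x, y)
        obj += w @ feat(x, y_hat) + loss - w @ feat(x, y)
    return obj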
The key to solving Eq. (3) efficiently is the loss-augmented inference problem,
$\max_{y_i'\in\mathcal{Y}_i} [w^\top f_i(y_i') + \ell_i(y_i')]$. This optimization problem has precisely the same form as the prediction problem whose parameters we are trying to learn, $\max_{y_i'\in\mathcal{Y}_i} w^\top f_i(y_i')$, but with an additional term corresponding to the loss function. Tractability of the loss-augmented inference thus depends not only on the tractability of $\max_{y_i'\in\mathcal{Y}_i} w^\top f_i(y_i')$, but also on the form of the loss term $\ell_i(y_i')$. A natural choice in this regard is the Hamming distance, which simply counts the number of variables in which a candidate solution $y_i'$ differs from the target output $y_i$. In general, we need only assume that the loss function
For example, in the case of bipartite matchings the Hamming loss counts the number of different edges in the matchings $y_i$ and $y_i'$ and can be written as
$$\ell_i^H(y_i') = \sum_{jk} y_{i,jk} + \sum_{jk} (1 - 2 y_{i,jk})\, y_{i,jk}'.$$
Thus the loss-augmented matching problem for example $i$ can be written as an LP similar to Eq. (2) (without the constant term $\sum_{jk} y_{i,jk}$):
$$\max_{0\le z\le 1}\; \sum_{jk} z_{i,jk}\big[w^\top f(x_{i,jk}) + 1 - 2 y_{i,jk}\big] \quad \text{s.t.}\quad \sum_j z_{i,jk} \le 1,\quad \sum_k z_{i,jk} \le 1.$$
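Reusing the padding device from the matching example above, the loss-augmented matching is just a matching with shifted edge scores (our own sketch, not the authors' code):

import numpy as np
from scipy.optimize import linear_sum_assignment

def loss_augmented_matching(scores, y_true):
    # Maximize sum_jk z_jk * (scores_jk + 1 - 2*y_true_jk) over matchings,
    # where scores_jk = w.f(x_jk) and y_true is the 0/1 gold matching matrix.
    aug = scores + 1.0 - 2.0 * y_true
    m, n = aug.shape
    padded = np.hstack([-aug, np.zeros((m, m))])  # nulls: rows may stay unmatched
    rows, cols = linear_sum_assignment(padded)
    z = np.zeros_like(y_true)
    for j, k in zip(rows, cols):
        if k < n and aug[j, k] > 0:
            z[j, k] = 1.0
    return z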
Generally, when we can express $\max_{y_i'\in\mathcal{Y}_i} w^\top f_i(y_i')$ as an LP, $\max_{z_i\in\mathcal{Z}_i} w^\top F_i z_i$ with $\mathcal{Z}_i = \{z_i : A_i z_i \le b_i,\ z_i \ge 0\}$ for appropriately defined constraints $A_i, b_i$ and feature matrix $F_i$, we have a similar LP for the loss-augmented inference for each example $i$: $d_i + \max_{z_i\in\mathcal{Z}_i} (F_i^\top w + c_i)^\top z_i$ for appropriately defined $d_i, F_i, c_i, A_i, b_i$. Let $z = \{z_1, \ldots, z_m\}$, $\mathcal{Z} = \mathcal{Z}_1 \times \cdots \times \mathcal{Z}_m$.
We could proceed by making use of Lagrangian duality, which yields a joint convex optimization problem; this is the approach described in [19]. Instead we take a different tack
here, posing the problem in its natural saddle-point form:
$$\min_{w\in\mathcal{W}} \max_{z\in\mathcal{Z}}\; \frac{\|w\|^2}{2C} + \sum_i \big( w^\top F_i z_i + c_i^\top z_i - w^\top f_i(y_i) \big). \qquad (4)$$
As we discuss in the following section, this approach allows us to exploit the structure of
W and Z separately, allowing for efficient solutions for a wider range of structure spaces.
4 Extragradient method
The key operations of the method we present below are gradient calculations and Euclidean projections. We let
$$\mathcal{L}(w, z) = \frac{\|w\|^2}{2C} + \sum_i \big( w^\top F_i z_i + c_i^\top z_i - w^\top f_i(y_i) \big),$$
with gradients given by $\nabla_w \mathcal{L}(w, z) = \frac{w}{C} + \sum_i \big(F_i z_i - f_i(y_i)\big)$ and $\nabla_{z_i} \mathcal{L}(w, z) = F_i^\top w + c_i$. We denote the projection of a vector $z_i'$ onto $\mathcal{Z}_i$ as $\pi_{\mathcal{Z}_i}(z_i') = \arg\min_{z_i\in\mathcal{Z}_i} \|z_i' - z_i\|$ and similarly the projection onto $\mathcal{W}$ as $\pi_{\mathcal{W}}(w') = \arg\min_{w\in\mathcal{W}} \|w' - w\|$.
A well-known solution strategy for saddle-point optimization is provided by the extragradient method [11]. An iteration of the extragradient method consists of two very simple
steps, prediction $(w, z) \to (w^p, z^p)$ and correction $(w^p, z^p) \to (w^c, z^c)$:
$$w^p = \pi_{\mathcal{W}}\big(w - \beta \nabla_w \mathcal{L}(w, z)\big); \qquad z_i^p = \pi_{\mathcal{Z}_i}\big(z_i + \beta \nabla_{z_i} \mathcal{L}(w, z)\big); \qquad (5)$$
$$w^c = \pi_{\mathcal{W}}\big(w - \beta \nabla_w \mathcal{L}(w^p, z^p)\big); \qquad z_i^c = \pi_{\mathcal{Z}_i}\big(z_i + \beta \nabla_{z_i} \mathcal{L}(w^p, z^p)\big); \qquad (6)$$
where $\beta$ is an appropriately chosen step size. The algorithm starts with a feasible point $w = 0$, $z_i$'s that correspond to the assignments $y_i$'s, and step size $\beta = 1$. After each prediction step, it computes $r = \beta\, \frac{\|\nabla\mathcal{L}(w, z) - \nabla\mathcal{L}(w^p, z^p)\|}{\|w - w^p\| + \|z - z^p\|}$. If $r$ is greater than a threshold $\nu$, the step size is decreased using an Armijo-type rule, $\beta = (2/3)\,\beta \min(1, 1/r)$, and a new prediction step is computed until $r \le \nu$, where $\nu \in (0, 1)$ is a parameter of the algorithm. Once a suitable $\beta$ is found, the correction step is taken and $(w^c, z^c)$ becomes the new $(w, z)$. The method is guaranteed to converge linearly to a solution $w^*, z^*$ [11, 9]. See the longer version of this paper at http://www.cs.berkeley.edu/~taskar/extragradient.pdf
for details. By comparison, Exponentiated Gradient [4] has sublinear convergence rate
guarantees, while Structured SMO [18] has none.
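The following Python sketch of the update loop is our own code, not the authors' implementation; the projections and gradients are passed in as callables, and we sum the two gradient-block norms when computing r for simplicity:

import numpy as np

def extragradient(grad_w, grad_z, proj_W, proj_Z, w, z, steps=500,
                  beta=1.0, nu=0.5):
    # Extragradient iterations (5)-(6) with the Armijo-style step-size rule.
    for _ in range(steps):
        while True:
            wp = proj_W(w - beta * grad_w(w, z))          # prediction (5)
            zp = proj_Z(z + beta * grad_z(w, z))
            num = np.linalg.norm(grad_w(w, z) - grad_w(wp, zp)) + \
                  np.linalg.norm(grad_z(w, z) - grad_z(wp, zp))
            den = np.linalg.norm(w - wp) + np.linalg.norm(z - zp)
            r = beta * num / max(den, 1e-12)
            if r <= nu:
                break
            beta = (2.0 / 3.0) * beta * min(1.0, 1.0 / r)  # shrink step size
        w = proj_W(w - beta * grad_w(wp, zp))              # correction (6)
        z = proj_Z(z + beta * grad_z(wp, zp))
    return w, z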
The key step influencing the efficiency of the algorithm is the Euclidean projection onto
the feasible sets $\mathcal{W}$ and $\mathcal{Z}_i$. In case $\mathcal{W} = \mathbb{R}^n$, the projection is the identity operation; projecting onto $\mathbb{R}^n_+$ consists of clipping negative weights to zero. Additional problem-specific constraints on the weight space can be efficiently incorporated in this step (although linear convergence guarantees only hold for polyhedral $\mathcal{W}$). In case of word alignment, $\mathcal{Z}_i$ is the convex hull of bipartite matchings and the problem reduces to the much-studied minimum cost quadratic flow problem. The projection $z_i = \pi_{\mathcal{Z}_i}(z_i^0)$ is given by
$$\min_{0\le z\le 1}\; \sum_{jk} \tfrac{1}{2}\,\big(z_{i,jk}^0 - z_{i,jk}\big)^2 \quad \text{s.t.}\quad \sum_j z_{i,jk} \le 1,\quad \sum_k z_{i,jk} \le 1.$$
We use a standard reduction of bipartite matching to min-cost flow by introducing a source node $s$ linked to all the nodes in $V_i^s$ (words in the "source" sentence), and a sink node $t$ linked from all the nodes in $V_i^t$ (words in the "target" sentence), using edges of capacity 1 and cost 0. The original edges $jk$ have a quadratic cost $\tfrac{1}{2}(z_{i,jk}^0 - z_{i,jk})^2$ and capacity 1. Minimum (quadratic) cost flow from $s$ to $t$ is the projection of $z_i^0$ onto $\mathcal{Z}_i$.
The reduction of the projection to minimum quadratic cost flow for the min-cut polytope $\mathcal{Z}_i$ is shown in the longer version of the paper. Algorithms for solving this problem are nearly as efficient as those for solving regular min-cost flow problems. In case of word alignment, the running time scales with the cube of the sentence length. We use publicly-available code for solving this problem [8] (see http://www.math.washington.edu/~tseng/netflowg_nl/).
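In place of the specialized min-cost quadratic flow code referenced above, the projection can be prototyped with an off-the-shelf QP solver; the sketch below assumes the cvxpy package and is far slower, but handy for checking:

import cvxpy as cp
import numpy as np

def project_onto_matching_polytope(z0):
    # Euclidean projection of z0 onto {z : 0 <= z <= 1, row/column sums <= 1}
    m, n = z0.shape
    z = cp.Variable((m, n))
    constraints = [z >= 0, z <= 1,
                   cp.sum(z, axis=0) <= 1, cp.sum(z, axis=1) <= 1]
    cp.Problem(cp.Minimize(cp.sum_squares(z - z0)), constraints).solve()
    return z.value

print(project_onto_matching_polytope(np.array([[1.4, 0.2], [0.9, 0.8]])))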
5 Experiments
We investigate two structured models we described above: bipartite matchings for word
alignments and restricted potential Markov nets for 3D segmentation. A commercial QP solver, MOSEK, runs out of memory on the problems we describe below using the QP
formulation [19]. We compared the extragradient method with the averaged perceptron
algorithm [6]. A question which arises in practice is how to choose the regularization
parameter C. The typical approach is to run the algorithm for several values of the regularization parameter and pick the best model using a validation set. For the averaged
perceptron, a standard method is to run the algorithm tracking its performance on a validation set, and selecting the model with best performance. We use the same training regime
for the extragradient by running it with C = ?.
Object segmentation. We test our algorithm on a 3D scan segmentation problem using the class of Markov networks with potentials that were described above. The dataset
is a challenging collection of cluttered scenes containing articulated wooden puppets [2].
It contains eleven different single-view scans of three puppets of varying sizes and positions, with clutter and occluding objects such as rope, sticks and rings. Each scan consists of around 7, 000 points. Our goal was to segment the scenes into two classes?
puppet and background. We use five of the scenes for our training data, three for validation and three for testing. Sample scans from the training and test set can be seen at
http://www.cs.berkeley.edu/~taskar/3DSegment/. We computed spin images of size 10 × 5 bins at two different resolutions, then scaled the values and performed PCA
to obtain 45 principal components, which comprised our node features. We used the surface links output by the scanner as edges between points and for each edge only used a
single feature, set to a constant value of 1 for all edges. This results in all edges having the same potential. The training data contains approximately 37,000 nodes and 88,000 edges.

[Figure 2: Both plots show test error for the averaged perceptron and the extragradient (left y-axis) and training loss per node or edge for the extragradient (right y-axis) versus number of iterations, for (a) the object segmentation task and (b) the word alignment task.]
Training time took about 4 hours for 600 iterations on a 2.80GHz Pentium 4 machine.
Fig. 2(a) shows that the extragradient has a consistently lower error rate (about 3% for extragradient, 4% for averaged perceptron), using only slightly more expensive computations
per iteration. Also shown is the corresponding decrease in the hinge-loss upperbound on
the training data as the extragradient progresses.
Word alignment. We also tested our learning algorithm on word-level alignment using a
data set from the 2003 NAACL set [15], the English-French Hansards task. This corpus
consists of 1.1M automatically aligned sentences, and comes with a validation set of 39
sentence pairs and a test set of 447 sentences. The validation and test sentences have been
hand-aligned and are marked with both sure and possible alignments. Using these align|
ments, alignment error rate (AER) is calculated as: AER(A, S, P ) = 1 ? |A?S|+|A?P
.
|A|+|S|
Here, A is a set of proposed index pairs, S is the set of sure gold pairs, and P is the set of
possible gold pairs (where S ? P ).
We used the intersection of the predictions of the English-to-French and French-to-English
IBM Model 4 alignments (using GIZA++ [16]) on the first 5000 sentence pairs from the
1.1M sentences. The number of edges for 5000 sentences was about 555,000. We tested
on the 347 hand-aligned test examples, and used the validation set to select the stopping
point. The features on the word pair (ej , fk ) include measures of association, orthography,
relative position, predictions of generative models (see [22] for details). It took about 3
hours to perform 600 training iterations on the training data using a 2.8GHz Pentium 4
machine. Fig. 2(b) shows the extragradient performing slightly better (by about 0.5%)
than the averaged perceptron.
6 Conclusion
We have presented a general solution strategy for large-scale structured prediction problems. We have shown that these problems can be formulated as saddle-point optimization
problems, problems that are amenable to solution by the extragradient algorithm. Key
to our approach is the recognition that the projection step in the extragradient algorithm
can be solved by network flow algorithms. Network flow algorithms are among the most
well-developed in the field of combinatorial optimization, and yield stable, efficient algorithmic platforms. We have exhibited the favorable scaling of this overall approach in
two concrete, large-scale learning problems. It is also important to note that the general
approach extends to a much broader class of problems. In [21], we show how to apply
this approach efficiently to other types of models, including general Markov networks and
weighted context-free grammars, using Bregman projections.
Acknowledgments
We thank Paul Tseng for kindly answering our questions about his min-cost flow code.
This work was funded by the DARPA CALO project (03-000219) and Microsoft Research
MICRO award (05-081). SLJ was also supported by an NSERC graduate scholarship.
References
[1] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In
Proc. ICML, 2003.
[2] D. Anguelov, B. Taskar, V. Chatalbashev, D. Koller, D. Gupta, G. Heitz, and A. Ng. Discriminative learning of Markov random fields for segmentation of 3d scan data. In CVPR, 2005.
[3] P. Baldi, J. Cheng, and A. Vullo. Large-scale prediction of disulphide bond connectivity. In
Proc. NIPS, 2004.
[4] P. Bartlett, M. Collins, B. Taskar, and D. McAllester. Exponentiated gradient algorithms for
large-margin structured classification. In NIPS, 2004.
[5] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape
contexts. IEEE Trans. Pattern Anal. Mach. Intell., 24, 2002.
[6] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP, 2002.
[7] D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for
binary images. J. R. Statist. Soc. B, 51, 1989.
[8] F. Guerriero and P. Tseng. Implementation and test of auction methods for solving generalized network flow problems with separable convex cost. Journal of Optimization Theory and
Applications, 115(1):113-144, October 2002.
[9] B.S. He and L. Z. Liao. Improvements of some projection methods for monotone nonlinear
variational inequalities. JOTA, 112:111-128, 2002.
[10] M. Jerrum and A. Sinclair. Polynomial-time approximation algorithms for the Ising model.
SIAM J. Comput., 22, 1993.
[11] G. M. Korpelevich. The extragradient method for finding saddle points and other problems.
Ekonomika i Matematicheskie Metody, 12:747-756, 1976.
[12] S. Kumar and M. Hebert. Discriminative fields for modeling spatial dependencies in natural
images. In NIPS, 2003.
[13] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for
segmenting and labeling sequence data. In ICML, 2001.
[14] E. Matusov, R. Zens, and H. Ney. Symmetric word alignments for statistical machine translation. In Proc. COLING, 2004.
[15] R. Mihalcea and T. Pedersen. An evaluation exercise for word alignment. In Proceedings of
the HLT-NAACL 2003 Workshop, Building and Using parallel Texts: Data Driven Machine
Translation and Beyond, pages 1-6, Edmonton, Alberta, Canada, 2003.
[16] F. Och and H. Ney. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 2003.
[17] A. Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003.
[18] B. Taskar. Learning Structured Prediction Models: A Large Margin Approach. PhD thesis,
Stanford University, 2004.
[19] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models:
a large margin approach. In ICML, 2005.
[20] B. Taskar, C. Guestrin, and D. Koller. Max margin Markov networks. In NIPS, 2003.
[21] B. Taskar, S. Lacoste-Julien, and M. Jordan. Structured prediction, dual extragradient and
Bregman projections. Technical report, UC Berkeley Statistics Department, 2005.
[22] B. Taskar, S. Lacoste-Julien, and D. Klein. A discriminative matching approach to word alignment. In EMNLP, 2005.
[23] L. G. Valiant. The complexity of computing the permanent. Theoretical Computer Science,
8:189-201, 1979.
Nearest Neighbor Classification
Kilian Q. Weinberger, John Blitzer and Lawrence K. Saul
Department of Computer and Information Science, University of Pennsylvania
Levine Hall, 3330 Walnut Street, Philadelphia, PA 19104
{kilianw, blitzer, lsaul}@cis.upenn.edu
Abstract
We show how to learn a Mahalanobis distance metric for k-nearest neighbor (kNN) classification by semidefinite programming. The metric is
trained with the goal that the k-nearest neighbors always belong to the
same class while examples from different classes are separated by a large
margin. On seven data sets of varying size and difficulty, we find that
metrics trained in this way lead to significant improvements in kNN
classification?for example, achieving a test error rate of 1.3% on the
MNIST handwritten digits. As in support vector machines (SVMs), the
learning problem reduces to a convex optimization based on the hinge
loss. Unlike learning in SVMs, however, our framework requires no
modification or extension for problems in multiway (as opposed to binary) classification.
1 Introduction
The k-nearest neighbors (kNN) rule [3] is one of the oldest and simplest methods for pattern
classification. Nevertheless, it often yields competitive results, and in certain domains,
when cleverly combined with prior knowledge, it has significantly advanced the state-of-the-art [1, 14]. The kNN rule classifies each unlabeled example by the majority label among
its k-nearest neighbors in the training set. Its performance thus depends crucially on the
distance metric used to identify nearest neighbors.
In the absence of prior knowledge, most kNN classifiers use simple Euclidean distances
to measure the dissimilarities between examples represented as vector inputs. Euclidean
distance metrics, however, do not capitalize on any statistical regularities in the data that
might be estimated from a large training set of labeled examples.
Ideally, the distance metric for kNN classification should be adapted to the particular
problem being solved. It can hardly be optimal, for example, to use the same distance metric for face recognition as for gender identification, even if in both tasks, distances are computed between the same fixed-size images. In fact, as shown by many researchers [2, 6, 7, 8, 12, 13], kNN classification can be significantly improved by learning
a distance metric from labeled examples. Even a simple (global) linear transformation of
input features has been shown to yield much better kNN classifiers [7, 12]. Our work builds
in a novel direction on the success of these previous approaches.
In this paper, we show how to learn a Mahalanobis distance metric for kNN classification.
The metric is optimized with the goal that k-nearest neighbors always belong to the same
class while examples from different classes are separated by a large margin. Our goal for
metric learning differs in a crucial way from those of previous approaches that minimize the
pairwise distances between all similarly labeled examples [12, 13, 17]. This latter objective
is far more difficult to achieve and does not leverage the full power of kNN classification,
whose accuracy does not require that all similarly labeled inputs be tightly clustered.
Our approach is largely inspired by recent work on neighborhood component analysis [7]
and metric learning by energy-based models [2]. Though based on the same goals, however,
our methods are quite different. In particular, we are able to cast our optimization as an
instance of semidefinite programming. Thus the optimization we propose is convex, and
its global minimum can be efficiently computed.
Our approach has several parallels to learning in support vector machines (SVMs)?most
notably, the goal of margin maximization and a convex objective function based on the
hinge loss. In light of these parallels, we describe our approach as large margin nearest
neighbor (LMNN) classification. Our framework can be viewed as the logical counterpart
to SVMs in which kNN classification replaces linear classification.
Our framework contrasts with classification by SVMs, however, in one intriguing respect:
it requires no modification for problems in multiway (as opposed to binary) classification. Extensions of SVMs to multiclass problems typically involve combining the results
of many binary classifiers, or they require additional machinery that is elegant but nontrivial [4]. In both cases the training time scales at least linearly in the number of classes.
By contrast, our learning problem has no explicit dependence on the number of classes.
2
Model
Let $\{(\vec x_i, y_i)\}_{i=1}^n$ denote a training set of $n$ labeled examples with inputs $\vec x_i \in \mathbb{R}^d$ and discrete (but not necessarily binary) class labels $y_i$. We use the binary matrix $y_{ij} \in \{0, 1\}$ to indicate whether or not the labels $y_i$ and $y_j$ match. Our goal is to learn a linear transformation $L : \mathbb{R}^d \to \mathbb{R}^d$, which we will use to compute squared distances as:
$$D(\vec x_i, \vec x_j) = \|L(\vec x_i - \vec x_j)\|^2. \qquad (1)$$
Specifically, we want to learn the linear transformation that optimizes kNN classification
when distances are measured in this way. We begin by developing some useful terminology.
Target neighbors
In addition to the class label yi , for each input ~xi we also specify k ?target? neighbors?
that is, k other inputs with the same label yi that we wish to have minimal distance to ~xi ,
as computed by eq. (1). In the absence of prior knowledge, the target neighbors can simply
be identified as the k nearest neighbors, determined by Euclidean distance, that share the
same label $y_i$. (This was done for all the experiments in this paper.) We use $\eta_{ij} \in \{0, 1\}$ to
indicate whether input $\vec{x}_j$ is a target neighbor of input $\vec{x}_i$. Like the binary matrix $y_{ij}$, the
matrix $\eta_{ij}$ is fixed and does not change during learning.
Cost function
Our cost function over the distance metrics parameterized by eq. (1) has two competing
terms. The first term penalizes large distances between each input and its target neighbors,
while the second term penalizes small distances between each input and all other inputs
that do not share the same label. Specifically, the cost function is given by:
$$\varepsilon(L) = \sum_{ij}\eta_{ij}\,\|L(\vec{x}_i-\vec{x}_j)\|^2 \;+\; c\sum_{ijl}\eta_{ij}(1-y_{il})\Big[1+\|L(\vec{x}_i-\vec{x}_j)\|^2-\|L(\vec{x}_i-\vec{x}_l)\|^2\Big]_+ \qquad (2)$$
where in the second term [z]+ = max(z, 0) denotes the standard hinge loss and c > 0 is
some positive constant (typically set by cross validation). Note that the first term only
penalizes large distances between inputs and target neighbors, not between all similarly
labeled examples.
Large margin
The second term in the cost function incorporates the idea of a margin. In particular, for each input ~xi , the hinge loss
is incurred by differently labeled inputs
whose distances do not exceed, by one
absolute unit of distance, the distance
from input ~xi to any of its target neighbors. The cost function thereby favors
distance metrics in which differently labeled inputs maintain a large margin of
distance and do not threaten to "invade"
each other's neighborhoods. The learning dynamics induced by this cost function are illustrated in Fig. 1 for an input
with k = 3 target neighbors.
[Figure 1 appears here: schematic of an input's local neighborhood before and after training, marking similarly and differently labeled points, the target neighbors, and the margin.]
Figure 1: Schematic illustration of one input's neighborhood ~xi before training (left) versus after training (right). The distance metric is optimized so that: (i) its k = 3 target neighbors lie within a smaller radius after training; (ii) differently labeled inputs lie outside this smaller radius, with a margin of at least one unit distance. Arrows indicate the gradients on distances arising from the optimization of the cost function.
Parallels with SVMs
The competing terms in eq. (2) are analogous to those in the cost function for
SVMs [11]. In both cost functions, one
term penalizes the norm of the "parameter" vector (i.e., the weight vector of the maximum margin hyperplane, or the linear transformation in the distance metric), while the other incurs the hinge loss for examples that
violate the condition of unit margin. Finally, just as the hinge loss in SVMs is only triggered by examples near the decision boundary, the hinge loss in eq. (2) is only triggered by
differently labeled examples that invade each other's neighborhoods.
Convex optimization
We can reformulate the optimization of eq. (2) as an instance of semidefinite programming [16]. A semidefinite program (SDP) is a linear program with the additional constraint
that a matrix whose elements are linear in the unknown variables is required to be positive semidefinite. SDPs are convex; thus, with this reformulation, the global minimum of
eq. (2) can be efficiently computed. To obtain the equivalent SDP, we rewrite eq. (1) as:
$$D(\vec{x}_i, \vec{x}_j) = (\vec{x}_i - \vec{x}_j)^\top M\,(\vec{x}_i - \vec{x}_j), \qquad (3)$$
where the matrix $M = L^\top L$ parameterizes the Mahalanobis distance metric induced by
the linear transformation L. Rewriting eq. (2) as an SDP in terms of M is straightforward,
since the first term is already linear in $M = L^\top L$ and the hinge loss can be "mimicked" by
introducing slack variables $\xi_{ij}$ for all pairs of differently labeled inputs (i.e., for all $\langle i, j\rangle$
such that $y_{ij} = 0$). The resulting SDP is given by:
Minimize $\sum_{ij} \eta_{ij}\, (\vec{x}_i - \vec{x}_j)^\top M (\vec{x}_i - \vec{x}_j) + c \sum_{ijl} \eta_{ij} (1 - y_{il})\, \xi_{ijl}$ subject to:
(1) $(\vec{x}_i - \vec{x}_l)^\top M (\vec{x}_i - \vec{x}_l) - (\vec{x}_i - \vec{x}_j)^\top M (\vec{x}_i - \vec{x}_j) \geq 1 - \xi_{ijl}$
(2) $\xi_{ijl} \geq 0$
(3) $M \succeq 0$.
The last constraint $M \succeq 0$ indicates that the matrix M is required to be positive semidefinite. While this SDP can be solved by standard online packages, general-purpose solvers
tend to scale poorly in the number of constraints. Thus, for our work, we implemented our
own special-purpose solver, exploiting the fact that most of the slack variables $\{\xi_{ij}\}$ never
attain positive values1. The slack variables $\{\xi_{ij}\}$ are sparse because most labeled inputs are
well separated; thus, their resulting pairwise distances do not incur the hinge loss, and we
obtain very few active constraints. Our solver was based on a combination of sub-gradient
descent in both the matrices L and M, the latter used mainly to verify that we had reached
the global minimum. We projected updates in M back onto the positive semidefinite cone
after each step. Alternating projection algorithms provably converge [16], and in this case
our implementation worked much faster than generic solvers2 .
3 Results
We evaluated the algorithm in the previous section on seven data sets of varying size and
difficulty. Table 1 compares the different data sets. Principal components analysis (PCA)
was used to reduce the dimensionality of image, speech, and text data, both to speed up
training and avoid overfitting. Isolet and MNIST have pre-defined training/test splits; for
all other data sets, results are averaged over several runs with randomly generated 70/30
training/test splits. Both the number of target neighbors (k) and the weighting
parameter (c) in eq. (2) were set by cross validation. (For the purpose of cross-validation,
the training sets were further partitioned into training and validation sets.) We begin by
reporting overall trends, then discussing the individual data sets in more detail.
We first compare kNN classification error rates using Mahalanobis versus Euclidean distances. To break ties among different classes, we repeatedly reduced the neighborhood
size, ultimately classifying (if necessary) by just the k = 1 nearest neighbor. Fig. 2 summarizes the main results. Except on the smallest data set (where over-training appears to
be an issue), the Mahalanobis distance metrics learned by semidefinite programming led to
significant improvements in kNN classification, both in training and testing. The training
error rates reported in Fig. 2 are leave-one-out estimates.
We also computed test error rates using a variant of kNN classification, inspired by previous
work on energy-based models [2]. Energy-based classification of a test example ~xt was
done by finding the label that minimizes the cost function in eq. (2). In particular, for
a hypothetical label yt , we accumulated the squared distances to the k nearest neighbors
of ~xt that share the same label in the training set (corresponding to the first term in the
cost function); we also accumulated the hinge loss over all pairs of differently labeled
examples that result from labeling ~xt by yt (corresponding to the second term in the cost
function). Finally, the test example was classified by the hypothetical label that minimized
the combination of these two terms:
$$y_t = \operatorname*{argmin}_{y_t} \sum_j \eta_{tj}\, \|L(\vec{x}_t - \vec{x}_j)\|^2 + c \sum_{j,\; i=t \,\vee\, l=t} \eta_{ij} (1 - y_{il}) \Big[ 1 + \|L(\vec{x}_i - \vec{x}_j)\|^2 - \|L(\vec{x}_i - \vec{x}_l)\|^2 \Big]_+$$
As shown in Fig. 2, energy-based classification with this assignment rule generally led to
even further reductions in test error rates.
Finally, we compared our results to those of multiclass SVMs [4]. On each data set (except
MNIST), we trained multiclass SVMs using linear and RBF kernels; Fig. 2 reports the
results of the better classifier. On MNIST, we used a non-homogeneous polynomial kernel
of degree four, which gave us our best results. (See also [9].)
1. A great speedup can be achieved by solving an SDP that only monitors a fraction of the margin conditions, then using the resulting solution as a starting point for the actual SDP of interest.
2. A matlab implementation is currently available at http://www.seas.upenn.edu/~kilianw/lmnn.
                     Iris   Wine   Faces   Bal     Isolet   News      MNIST
examples (train)     106    126    280     445     6238     16000     60000
examples (test)      44     52     120     90      1559     2828      10000
classes              3      3      40      3       26       20        10
input dimensions     4      13     1178    4       617      30000     784
features after PCA   4      13     30      4       172      200       164
constraints          5278   7266   78828   76440   37 Mil   164 Mil   3.3 Bil
active constraints   113    1396   7665    3099    45747    732359    243596
CPU time (per run)   2s     8s     7s      13s     11m      1.5h      4h
runs                 100    100    100     100     1        10        1
Table 1: Properties of data sets and experimental parameters for LMNN classification.
[Figure 2 appears here: bar charts of training and test error rates (%) on the Iris, Wine, Faces, Bal, Isolet, News, and MNIST data sets for kNN with Euclidean distance, kNN with Mahalanobis distance, energy-based classification, and multiclass SVMs.]
Figure 2: Training and test error rates for kNN classification using Euclidean versus Mahalanobis distances. The latter yields lower test error rates on all but the smallest data set (presumably due to over-training). Energy-based classification (see text) generally leads to further improvement. The results approach those of state-of-the-art multiclass SVMs.
Small data sets with few classes
The wine, iris, and balance data sets are small data sets, with less than 500 training examples and just three classes, taken from the UCI Machine Learning Repository3 . On data
sets of this size, a distance metric can be learned in a matter of seconds. The results in
Fig. 2 were averaged over 100 experiments with different random 70/30 splits of each data
set. Our results on these data sets are roughly comparable (i.e., better in some cases, worse
in others) to those of neighborhood component analysis (NCA) and relevant component
analysis (RCA), as reported in previous work [7].
Face recognition
The AT&T face recognition data set4 contains 400 grayscale images of 40 individuals in
10 different poses. We downsampled the images to 38 × 31 pixels and used PCA to
obtain 30-dimensional eigenfaces [15]. Training and test sets were created by randomly
sampling 7 images of each person for training and 3 images for testing. The task involved
40-way classification?essentially, recognizing a face from an unseen pose. Fig. 2 shows
the improvements due to LMNN classification. Fig. 3 illustrates the improvements more
graphically by showing how the k = 3 nearest neighbors change as a result of learning a
Mahalanobis metric. (Though the algorithm operated on low dimensional eigenfaces, for
clarity the figure shows the rescaled images.)
3. Available at http://www.ics.uci.edu/~mlearn/MLRepository.html.
4. Available at http://www.uk.research.att.com/facedatabase.html
[Figure 3 appears here: example test images, each shown with faces that entered its 3 nearest neighbors after training and faces that left them.]
Figure 3: Images from the AT&T face recognition data base. Top row: an image correctly
recognized by kNN classification (k = 3) with Mahalanobis distances, but not with Euclidean distances. Middle row: correct match among the k = 3 nearest neighbors according
to Mahalanobis distance, but not Euclidean distance. Bottom row: incorrect match among
the k = 3 nearest neighbors according to Euclidean distance, but not Mahalanobis distance.
Spoken letter recognition
The Isolet data set from UCI Machine Learning Repository has 6238 examples and 26
classes corresponding to letters of the alphabet. We reduced the input dimensionality (originally at 617) by projecting the data onto its leading 172 principal components, enough
to account for 95% of its total variance. On this data set, Dietterich and Bakiri report test
error rates of 4.2% using nonlinear backpropagation networks with 26 output units (one per
class) and 3.3% using nonlinear backpropagation networks with a 30-bit error correcting
code [5]. LMNN with energy-based classification obtains a test error rate of 3.7%.
Text categorization
The 20-newsgroups data set consists of posted articles from 20 newsgroups, with roughly
1000 articles per newsgroup. We used the 18828-version of the data set5 which has crosspostings removed and some headers stripped out. We tokenized the newsgroups using the
rainbow package [10]. Each article was initially represented by the weighted word-counts
of the 20,000 most common words. We then reduced the dimensionality by projecting the
data onto its leading 200 principal components. The results in Fig. 2 were obtained by averaging over 10 runs with 70/30 splits for training and test data. Our best result for LMNN
on this data set, a 13.0% test error rate, improved significantly on kNN classification using
Euclidean distances. LMNN also performed comparably to our best multiclass SVM [4],
which obtained a 12.4% test error rate using a linear kernel and 20000 dimensional inputs.
Handwritten digit recognition
The MNIST data set of handwritten digits6 has been extensively benchmarked [9]. We
deskewed the original 28 × 28 grayscale images, then reduced their dimensionality by retaining only the first 164 principal components (enough to capture 95% of the data's overall
variance). Energy-based LMNN classification yielded a test error rate of 1.3%, cutting the
baseline kNN error rate by over one-third. Other comparable benchmarks [9] (not exploiting additional prior knowledge) include multilayer neural nets at 1.6% and SVMs at 1.2%.
Fig. 4 shows some digits whose nearest neighbor changed as a result of learning, from a
mismatch using Euclidean distance to a match using Mahalanobis distance.
4 Related Work
Many researchers have attempted to learn distance metrics from labeled examples. We
briefly review some recent methods, pointing out similarities and differences with our work.
5. Available at http://people.csail.mit.edu/jrennie/20Newsgroups/
6. Available at http://yann.lecun.com/exdb/mnist/
[Figure 4 appears here: example MNIST test images, each shown with its nearest neighbor after and before training.]
Figure 4: Top row: Examples of MNIST images whose nearest neighbor changes during training. Middle row: nearest neighbor after training, using the Mahalanobis distance
metric. Bottom row: nearest neighbor before training, using the Euclidean distance metric.
Xing et al [17] used semidefinite programming to learn a Mahalanobis distance metric
for clustering. Their algorithm aims to minimize the sum of squared distances between
similarly labeled inputs, while maintaining a lower bound on the sum of distances between
differently labeled inputs. Our work has a similar basis in semidefinite programming, but
differs in its focus on local neighborhoods for kNN classification.
Shalev-Shwartz et al [12] proposed an online learning algorithm for learning a Mahalanobis
distance metric. The metric is trained with the goal that all similarly labeled inputs have
small pairwise distances (bounded from above), while all differently labeled inputs have
large pairwise distances (bounded from below). A margin is defined by the difference of
these thresholds and induced by a hinge loss function. Our work has a similar basis in its
appeal to margins and hinge loss functions, but again differs in its focus on local neighborhoods for kNN classification. In particular, we do not seek to minimize the distance
between all similarly labeled inputs, only those that are specified as neighbors.
Goldberger et al [7] proposed neighborhood component analysis (NCA), a distance metric
learning algorithm especially designed to improve kNN classification. The algorithm minimizes the probability of error under stochastic neighborhood assignments using gradient
descent. Our work shares essentially the same goals as NCA, but differs in its construction
of a convex objective function.
Chopra et al [2] recently proposed a framework for similarity metric learning in which
the metrics are parameterized by pairs of identical convolutional neural nets. Their cost
function penalizes large distances between similarly labeled inputs and small distances
between differently labeled inputs, with penalties that incorporate the idea of a margin.
Our work is based on a similar cost function, but our metric is parameterized by a linear
transformation instead of a convolutional neural net. In this way, we obtain an instance of
semidefinite programming.
Relevant component analysis (RCA) constructs a Mahalanobis distance metric from a
weighted sum of in-class covariance matrices [13]. It is similar to PCA and linear discriminant analysis (but different from our approach) in its reliance on second-order statistics.
Hastie and Tibshirani [8] and Domeniconi et al [6] consider schemes for locally adaptive
distance metrics that vary throughout the input space. The latter work appeals to the goal
of margin maximization but otherwise differs substantially from our approach. In particular, Domeniconi et al [6] suggest to use the decision boundaries of SVMs to induce a
locally adaptive distance metric for kNN classification. By contrast, our approach (though
similarly named) does not involve the training of SVMs.
5 Discussion
In this paper, we have shown how to learn Mahalanobis distance metrics for kNN classification by semidefinite programming. Our framework makes no assumptions about the
structure or distribution of the data and scales naturally to a large number of classes. Ongoing
work is focused in three directions. First, we are working to apply LMNN classification to
problems with hundreds or thousands of classes, where its advantages are most apparent.
Second, we are investigating the kernel trick to perform LMNN classification in nonlinear feature spaces. As LMNN already yields highly nonlinear decision boundaries in the
original input space, however, it is not obvious that ?kernelizing? the algorithm will lead to
significant further improvement. Finally, we are extending our framework to learn locally
adaptive distance metrics [6, 8] that vary across the input space. Such metrics should lead
to even more flexible and powerful large margin classifiers.
References
[1] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 24(4):509–522, 2002.
[2] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR-05), San Diego, CA, 2005.
[3] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, IT-13, pages 21–27, 1967.
[4] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265–292, 2001.
[5] T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, 1995.
[6] C. Domeniconi, D. Gunopulos, and J. Peng. Large margin nearest neighbor classifiers. IEEE Transactions on Neural Networks, 16(4):899–909, 2005.
[7] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 513–520, Cambridge, MA, 2005. MIT Press.
[8] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 18:607–616, 1996.
[9] Y. LeCun, L. Jackel, L. Bottou, A. Brunot, C. Cortes, J. Denker, H. Drucker, I. Guyon, U. Muller, E. Sackinger, P. Simard, and V. Vapnik. A comparison of learning algorithms for handwritten digit recognition. In F. Fogelman and P. Gallinari, editors, Proceedings of the 1995 International Conference on Artificial Neural Networks (ICANN-95), pages 53–60, Paris, 1995.
[10] A. K. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow, 1996.
[11] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, 2002.
[12] S. Shalev-Shwartz, Y. Singer, and A. Y. Ng. Online and batch learning of pseudo-metrics. In Proceedings of the 21st International Conference on Machine Learning, Banff, Canada, 2004.
[13] N. Shental, T. Hertz, D. Weinshall, and M. Pavel. Adjustment learning and relevant component analysis. In Proceedings of the Seventh European Conference on Computer Vision (ECCV-02), volume 4, pages 776–792, London, UK, 2002. Springer-Verlag.
[14] P. Y. Simard, Y. LeCun, and J. Denker. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems, volume 6, pages 50–58, San Mateo, CA, 1993. Morgan Kaufman.
[15] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, 1991.
[16] L. Vandenberghe and S. P. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, March 1996.
[17] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
Efficient Unsupervised Learning for Localization
and Detection in Object Categories
Nicolas Loeff, Himanshu Arora
ECE Department
University of Illinois at
Urbana-Champaign
Alexander Sorokin, David Forsyth
Computer Science Department
University of Illinois at
Urbana-Champaign
{loeff,harora1}@uiuc.edu
{sorokin2,daf}@uiuc.edu
Abstract
We describe a novel method for learning templates for recognition and
localization of objects drawn from categories. A generative model represents the configuration of multiple object parts with respect to an object
coordinate system; these parts in turn generate image features. The complexity of the model in the number of features is low, meaning our model
is much more efficient to train than comparative methods. Moreover,
a variational approximation is introduced that allows learning to be orders of magnitude faster than previous approaches while incorporating
many more features. This results in both accuracy and localization improvements. Our model has been carefully tested on standard datasets;
we compare with a number of recent template models. In particular, we
demonstrate state-of-the-art results for detection and localization.
1 Introduction
Building appropriate object models is central to object recognition, which is a fundamental
problem in computer vision. Desirable characteristics of a model include a good representation of objects and fast, efficient learning algorithms that require as little supervised information as possible. We believe an appropriate representation of an object should allow
for both detection of its presence and its localization ("where is it?"). So far the quality of
object recognition in the literature has been measured by its detection performance only.
Viola and Jones [1] present a fast object detection system boosting Haar filter responses.
Another effective discriminative approach is that of a bag of keypoints [2, 3]. It is based on
clustering image patches using appearance only, disregarding geometric information. The
performance for detection in this algorithm is among the state of the art. However as no
geometry cues are used during training, features that do not belong to the object can be
incorporated into the object model. This is similar to classic overfitting and typically leads
to problems in object localization.
Weber et al. [4] represent an object as a constellation of parts. Fergus et al. [5] extend
the model to account for variability in appearance. The model encodes a template as a
set of feature-generating parts. Each part generates at most one feature. As a result the
complexity is determined by hardness of part-feature assignment. Heuristic search is used
to approximate the solution, but feasible problems are limited to 7 parts with 30 features.
Agarwal and Roth [6] use SNoW to learn a classifier on a sparse representation of patches
extracted around interesting points in the image. In [7], Leibe and Schiele use a voting
scheme to predict object configuration from locations of individual patches. Both approaches provide localization, but require manually localizing the objects in training images. Hillel et al. [8] independently proposed an approach similar to ours. Their model
however has higher learning complexity and inferior detection performance despite being
discriminative in nature.
In this paper, we present a generative probabilistic model for detection and localization of
objects that can be efficiently learnt with minimal supervision. The first crucial property
of the model is that it represents the configuration of multiple object parts with respect to
an unobserved, abstract object root (unlike [9, 10], where an ?object root? is chosen as
one of the visible parts of the object). This simplifies localization and allows our model to
overcome occlusion and errors in feature extraction. The model also becomes symmetric
with respect to visible parts. The second crucial assumption of the model is that a single
part can generate multiple features in the image (or none). This may seem counterintuitive,
but keypoint detectors generally detect several features around interesting areas. This
hypothesis also makes an explicit model for part occlusion unnecessary: instead occlusion
of a part means implicitly that no feature in the image is produced by it.
These assumptions allow us to model all features in the image as being emitted independently conditioned on the object center. As a result the complexity of inference in our
model is linear in the number of parts of the model and the number of features in the image, obviating the exponential complexity of combinatorial assignments in other approaches
[4, 5, 11]. This means our model is much easier than constellation models to train using
Expectation Maximization (EM), which enables the use of more features and more complex models with resulting improvements in both accuracy and localization. Furthermore
we introduce a variational (mean-field) approximation during learning that allows it to be
hundreds of times faster than previous approaches, with no substantial loss of accuracy.
2 Model
Our model of an object category is a template that generates features in the image. Each
image is represented as a set {fj } of F features extracted with the scale-saliency point
detector [13]. Each feature is described by its location and appearance. Feature extraction and representation will be detailed in section 3. As described in the introduction, we hypothesize that given
the object center all features are generated independently:
$p^{obj}(f_1, \dots, f_F) = \sum_{o_c} P(o_c) \prod_j p(f_j|o_c)$. The abstract object center, which does not
generate any features, is represented by a hidden random variable $o_c$. For simplicity it
takes values in a discrete grid of size $N_x \times N_y$ inside the image and $o_c$ is assumed to be a
priori uniformly distributed in its domain.
Conditioned on the object center, each feature is generated by a mixture of P parts plus a
background part. A set of hidden variables $\{w_{ij}\}$ represents which part ($i$) produced feature
$f_j$. These variables $w_{ij}$ then take values $\{0, 1\}$, restricted to $\sum_{i=1}^{P+1} w_{ij} = 1$. In other words,
$w_{ij} = 1$ means feature $j$ was produced by part $i$; each part can produce multiple features, but
each feature is produced by only one part. The distribution of a feature conditioned on the
object center is then $p(f_j|o_c) = \sum_i p(f_j, w_{ij}=1|o_c) = \sum_i p(f_j|w_{ij}=1, o_c)\,\pi_i$, where
$\pi_i$ is the prior emission probability of part $i$, subject to $\sum_{i=1}^{P+1} \pi_i = 1$.
Each part has a location distribution with respect to the object center corresponding to a two-dimensional full-covariance Gaussian, $p_i^L(x|o_c)$. The appearance (see section 3 for details) of a part does not depend on the configuration of the object; we consider two models:
Gaussian Model (G) Appearance $p_i^A$ is modeled as a k-dimensional diagonal-covariance Gaussian distribution.
Local Topic Model (LT) Appearance $p_i^A$ is modeled as a multinomial distribution on a previously learnt k-word image patch dictionary. This can be considered as a local topic model.
Let $\theta$ denote the set of parameters. The complete data likelihood (joint distribution) for
image n in the object model is then,
$$P_\theta^{obj}(\{w_{ij}\}, o_c, \{f_j\}) = \prod_{o'_c}\left[ P(o'_c) \prod_{j,i} \Big( p_i^L(f_j|o'_c)\, p_i^A(f_j)\, \pi_i \Big)^{[w_{ij}=1]} \right]^{[o_c = o'_c]} \qquad (1)$$
where $[\mathrm{expr}]$ is one if $\mathrm{expr}$ is true and zero otherwise. Marginalizing, the probability of
the observed image in the object model is then,
$$P_\theta^{obj}(\{f_j\}) = \sum_{o_c} P(o_c) \left\{ \prod_{j'} \sum_i P(f_{j'}, w_{ij'}=1 \mid o_c) \right\} \qquad (2)$$
The background model assumes all features are produced independently, with uniform location on the image. In the G model of appearance, the background appearance is modeled with a
k-dimensional full-covariance Gaussian distribution. In the LT model, we use a
multinomial distribution on the k-word image patch dictionary to model the background appearance.
2.1 Learning
The maximum-likelihood solution for the parameters of the above model does not have a
closed form. In order to train the model the parameters are computed numerically using the
approach of [14], minimizing a free-energy Fe associated with the model that is an upper
bound on the negative log-likelihood. Following [14], we denote v = {fj } as the set of
visible and $h = \{o_c, w_{ij}\}$ as the set of hidden variables. Let $D_{KL}$ be the K-L divergence:
$$F_e(Q, \theta) = D_{KL}\big( Q(h) \,\|\, P_\theta(h|v) \big) - \log P_\theta(v) = \int_h Q(h) \log \frac{Q(h)}{P_\theta(h, v)}\, dh \qquad (3)$$
In this bound, Q(h) can be a simpler approximation of the posterior probability $P_\theta(h|v)$,
which is used to compute estimates and update parameters. Minimizing eq. 3 with respect to
Q and $\theta$ under different restrictions produces a range of algorithms, including exact EM,
variational learning, and others [14]. Table 1 shows sample updates and the complexity of
these algorithms, and a comparison to other relevant work.
The background model is learnt before the object model is trained. As assumed earlier, for
the Gaussian appearance model the background appearance is a single Gaussian, whose
mean and covariance are estimated as the sample mean and covariance. For the Local Topic
model, the multinomial distribution is estimated as the sample histogram. The model for
background feature location is uniform and does not have any parameters.
EM Learning for the Object model: In the E-step, the set of parameters $\theta$ is fixed and
$F_e$ is minimized with respect to Q(h) without restrictions. This is equivalent to computing the actual posteriors in EM [14, 15]. In this case the optimal solution factorizes
as $Q(h) = Q(o_c)Q(w_{ij}|o_c) = P(o_c|v)P(w_{ij}|o_c, v)$. In the M-step, $F_e$ is minimized with
respect to the parameters $\theta$ using the current estimate of Q. Due to the conditional independence introduced in the model, inference is tractable and thus the E-step can be computed
efficiently. The overall complexity of inference is $O(FP \times N_x N_y)$.
Model               Update for $\mu_i^L$                                                                                                                         Complexity            Time (F, P)
Fergus et al. [5]   N/A                                                                                                                                          $F^P$                 36 hrs (30, 7)
Model (EM)          $\mu_i^L \leftarrow \frac{\sum_n \sum_{o_c} Q(o_c) \sum_j Q(w_{ji}|o_c)\,\{x_j^L - o_c\}}{\sum_n \sum_{o_c} Q(o_c) \sum_j Q(w_{ji}|o_c)}$    $FP \times N_x N_y$   3 hrs (50, 30)
(Variational)       $\mu_i^L \leftarrow \frac{\sum_n \{\sum_j Q(w_{ji})\,x_j^L - \sum_{o_c} Q(o_c)\,o_c\}}{\sum_n \sum_j Q(w_{ji})}$                             $FP + N_x N_y$        3 mins (100, 30)
Table 1: An example of an update, overall complexity and convergence time for our models and [5],
for different number of features per image (F ) and number of parts in the object model (P ). There is
an increase in speed of several orders of magnitude with respect to [5] on similar hardware.
Variational Learning: In this approach a mean-field approximation of Q is considered;
in the E-step the parameters $\theta$ are fixed and $F_e$ is minimized with respect to Q under the
restriction that it factorizes as $Q(h) = Q(o_c)Q(w_{ij})$. This corresponds to a decoupling of
location ($o_c$) and part-feature assignment ($w_{ij}$) in the approximation Q of the posterior
$P_\theta(h|v)$. In the M-step $\theta$ is fixed and the free energy $F_e$ is minimized with respect to this
(mean-field) version of Q. A comparison between the EM and variational updates of the mean
location $\mu_i^L$ of a part is shown in Table 1. The overall complexity of inference is now
$O(FP) + O(N_x N_y)$; this represents orders of magnitude of speedup with respect to the
already efficient EM learning. The impact on performance of the variational approximation
is discussed in section 4.
2.2 Detection and localization
For detection of object presence, a natural decision rule is the likelihood ratio test. After the
models are learnt, for each test image $P_\theta^{obj}(\{f_j\})/P^{bg}(\{f_j\})$ is compared to a threshold to
make the decision. Once the presence of the object is established, the most likely location
is given by the MAP estimate of oc . We assign parts in the model to the object if they exhibit consistent appearance and location. To remove model parts representing background
we use a threshold on the entropy of the appearance distribution for the LT model (the
determinant of the covariance in location for the G model). The MAP estimate of which
features in the image are assigned (marginalizing over the object center) to parts in the
model determines the support of the object. Bounding boxes include all keypoints assigned
to the object and means of all model parts belonging to the object even if no keypoint is
observed to be produced by such part. This explicitly handles occlusion (fig. 1).
3 Experimental setup
The performance of the method depends on the feature detector making consistent extraction in different instances of objects of the same type. We use the scale-saliency interest
point detector proposed in [13]. This method selects regions exhibiting unpredictable characteristics over both location and scale. The F regions with highest saliency over the image
provide the features for learning and recognition. After the keypoints are detected, patches
are extracted around this points and scale-normalized. A SIFT descriptor [16] (without
orientation) is obtained from these patches. For model G, due to the high dimensionality
of resulting space, PCA is performed choosing k = 15 components to represent the appearance of a feature. For model LT, we instead cluster the appearance of features in the
original SIFT space with a gaussian mixture model with k = 250 components and use the
most likely cluster as feature appearance representation.
For all experiments we use P = 30 parts. The number of features is F = 50 for G model
and F = 100 for the LT model, and the grid size is $N_x \times N_y = 238$. We test our approach on the Caltech 5
dataset: faces, motorbikes, airplanes, spotted cats vs. Caltech background and cars rear
2001 vs. cars background [5]. We initialize appearance and location of the parts with P
randomly chosen features from the training set. The stopping criterion is the change in Fe .
Figure 1: Local Topic model for faces, motorbikes and airplanes datasets [5]. In (a) the most likely
location of the object center is plotted as a black circle. With respect to this reference, the spatial
distribution (2D gaussian) of each part associated with the object is plotted in green. In (b) the
centers of all features extracted are depicted. Blue ones are assigned by the model to the object, and
red ones to the background. The bounding box is plotted in blue. Image (c) shows how many features
in the image are assigned to the same part (a property of our model, not shared by [5]): six parts are
chosen, their spatial distribution is plotted (green), and the features assigned to them are depicted in
blue. Eyes (4,5), mouth (3) and left ear (6) have multiple assignments each. For each these parts,
image (d) image shows the best matches in features extracted from the dataset. Note that the local
topic model can learn parts uniform in appearance (i.e. eyes) but also more complex parts (i.e. the
mouth part includes moustaches, beards and chins). The G appearance model and [5] do not have
this property. The images (e) show the robustness of the method in cases with occlusion, missed
detections and one caricature of a face. Images (f) and (g) show plots for motorbikes, and (h) and (i)
for airplanes.
4 Results
Detection: Although we believe that localization is an essential performance criterion, it is
useless if the approach cannot detect objects. Figure 2 depicts equal error rate detection performance for our models and [5, 3, 8]. We cannot compare our range of performance (for
train/test splits), shown on the plot, because this data is not available for other approaches.
Our method is robust to initialization (the variance for starting points is negligible compared to train/test split variance). The results show higher detection performance of all our
algorithms compared to the generative model presented in [5]. The local topic (LT) model
performs better than the model presented in [8]. The purely discriminative approach presented in [3] shows higher detection performance with different ("optimal combination")
features, but performs worse for the features we are using. The LT model showed consistently higher detection performance than the Gaussian (G) model. For both LT and G
models the variational approximations showed similar discriminative power to that of the
respective exact models. Unlike [5, 3], our model currently is not scale invariant. Nevertheless the probabilistic nature of the model allows for some tolerance to scale changes.
In datasets of manageable size, it is inevitable that the background is correlated with the
object. The result is that most modern methods that infer the template from partially supervised data tend to model some background parts as lying on the object (see figure
4). Doing so tends to increase detection performance. It is reasonable to expect this increase will not persist in the face of a dramatic change in background. One symptom of
this phenomenon (as in classical overfitting) is that methods that detect very well may be
bad at localization, because they cannot separate the object from background. We are able
to avoid this difficulty by predicting object extent conditioned on detection using only a
subset of parts known to have relatively low variance in location or appearance, given the
object center. We do not yet have an estimate of the increase in detection rate resulting
from overfitting. This is a topic of ongoing research. In our opinion, if a method can detect
but performs poorly at localization, the reason may be overfitting.
Localization: Previous work on localization required aligned images (bounding boxes)
or segmentation masks [7, 6]. A novel property of our model is that it learns to localize
the object and determine its spatial extent without supervision. Figure 1 shows learned
models and examples of localization. There is no standard measure to evaluate localization
performance in an unsupervised setting. In such a case, the object center can be learnt at
any position in the image, provided that this position is consistent across all images. We
thus use as our performance measure, the standard deviation of estimated object centers
and bounding boxes (obtained as in ?2.2), after normalizing the estimates of each image to
a coordinate system in which the ground truth bounding box is the unit square (0, 0)–(1, 1).
As a baseline we use the rectified center of the image. All objects of interest in both
airplane and motorbike datasets are centered in the image. As a result the baseline is a
good predictor of the object center and is hard to beat. However, in the faces dataset there is
much more variation in location; then the advantage of our approach becomes clear. Figure
3 shows the scatterplot of normalized object centers and bounding boxes. The table in
figure 2 shows the localization performance results using the proposed metric.
Variational approximation comparison: Unusually for a variational approximation it is
possible to compare it to the exact model; the results are excellent especially for the G
model. This is consistent with our observation that during learning the variational approximation is good in this case (the free energy bound appears tight). On the other hand, for
the LT model, the variational bound is loose during learning and localization performance
is equivalent, but slightly lower than that of the exact LT model. This may be explained by the
fact that the Gaussian appearance model is less flexible than the topic model and thus the G model
can better tolerate the decoupling of location and appearance.
[Figure 2 appears here: bar plots of equal error rate detection performance on the airplanes, motorbikes, faces, cars rear, and spotted cats data sets for the methods C, DL, DLc, B, G, GV, LT, and LV, alongside a table of localization performance (bounding-box and object-center standard deviations, vertical and horizontal, in %) for the faces, airplanes, and motorbikes data sets.]
Figure 2: Plots on the left show detection performance on the Caltech 5 datasets [5]. Equal error rate is reported. The original performance of the constellation model [5] is denoted by C. We denote by DLc the performance (best in literature) reported by [3] using an optimal combination of feature types, and by DL the performance using our features. The performance of [8] is denoted by B. We show performance for our G model (G), LT model (L) and their variational approximations (GV) and (LV) respectively. We report median performance over 20 runs and the performance range excluding the 10% best and 10% worst runs. On the right we show localization performance for all models on the Faces dataset and performance of the best model (LT) on all datasets. Standard deviation is reported in percentage units with respect to the ground truth bounding box. For bounding boxes we average the standard deviation in each direction. BL denotes baseline performance.
Figure 3: The airplane and motorbike datasets are aligned. Thus the image center baseline (b), (d)
performs well there. Our localization performs similarly (a), (c). There is more variation in location
in faces dataset. Scatterplot (f) shows the baseline performance and (g) shows the performance of
our model. (e) shows the bounding boxes computed by our approach (LT model). Object centers and
bounding boxes are rectified using the ground truth bounding boxes (blue). No information about
location or spatial extent of the object is given to the algorithm.
Figure 4: Approaches like [3] do not use geometric constraints during learning. Therefore, correlation between background and object in the dataset is incorporated into the object model. In this
case the ellipses represent the features that are used by the algorithm in [3] to decide the presence
of a face and motorbike (left images taken from [3]). On the other hand, our model (right images)
can estimate the location and support of the object, even though no information about it is provided
during learning. Blue circles represent the features assigned by the model to the face, the red points
are centers of features assigned to background (plot for Local Topic Model).
5 Conclusions and future work
We have presented a novel model for object categories. Our model allows efficient unsupervised learning, bringing the learning time to a few hours for full models and to minutes
for variational approximations. The significant reduction in complexity allows to handle
many more parts and features than comparable algorithms. The detection performance of
our approach compares favorably to the state of the art even when compared to purely discriminative approaches. Also our model is capable of learning the spatial extent of the
objects without supervision, with good results.
This combination of fast learning and ability to localize is required to tackle challenging
problems in computer vision. Among the most interesting applications we see unsupervised
segmentation, learning, detection and localization of multiple object categories, deformable
objects and objects with varying aspects.
References
[1] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. Proc. of CVPR, pages 511–518, 2001.
[2] G. Csurka, C. Dance, L. Fan, and C. Bray. Visual categorization with bags of keypoints. In Workshop on Stat. Learning in Comp. Vision, ECCV, pages 1–22, 2004.
[3] G. Dorkó and C. Schmid. Object class recognition using discriminative local features. Submitted to IEEE Trans. on PAMI, 2004.
[4] M. Weber, M. Welling, and P. Perona. Unsupervised learning of models for recognition. Proc. of ECCV (1), pages 18–32, 2000.
[5] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. Proc. of CVPR, pages 264–271, 2003.
[6] S. Agarwal and D. Roth. Learning a sparse representation for object detection. In Proc. of ECCV, volume 4, pages 113–130, Copenhagen, Denmark, May 2002.
[7] B. Leibe, A. Leonardis, and B. Schiele. Combined object categorization and segmentation with an implicit shape model. In Workshop on Stat. Learning in Comp. Vision, pages 17–32, May 2004.
[8] A. B. Hillel, T. Hertz, and D. Weinshall. Efficient learning of relational object class models. In Proc. of ICCV, pages 1762–1769, October 2005.
[9] R. Fergus, P. Perona, and A. Zisserman. A sparse object category model for efficient learning and exhaustive recognition. In Proc. of CVPR, pages 380–387, June 2005.
[10] D. Crandall, P. Felzenszwalb, and D. Huttenlocher. Spatial priors for part-based recognition using statistical models. In Proc. of CVPR, pages 10–17, 2005.
[11] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In Workshop on Generative-Model Based Vision, Washington, DC, June 2004.
[12] A. Opelt, M. Fussenegger, A. Pinz, and P. Auer. Generic object recognition with boosting. Technical Report TR-EMT-2004-01, EMT, TU Graz, Austria, 2004. Submitted to the IEEE Trans. on PAMI.
[13] T. Kadir and M. Brady. Saliency, scale and image description. IJCV, 45(2):83–105, 2001.
[14] B. Frey and N. Jojic. A comparison of algorithms for inference and learning in probabilistic graphical models. IEEE Trans. on PAMI, 27(9):1392–1416, 2005.
[15] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in graphical models, pages 355–368. MIT Press, Cambridge, MA, USA, 1999.
[16] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
1,977 | 2,797 | Products of "Edge-perts"
Peter Gehler
Max Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
[email protected]
Max Welling
Department of Computer Science
University of California Irvine
[email protected]
Abstract
Images represent an important and abundant source of data. Understanding their statistical structure has important applications such as image
compression and restoration. In this paper we propose a particular kind
of probabilistic model, dubbed the "products of edge-perts" model, to describe the structure of wavelet transformed images. We develop a practical denoising algorithm based on a single edge-pert and show state-of-the-art denoising performance on benchmark images.
1 Introduction
Images, when represented as a collection of pixel values, exhibit a high degree of redundancy. Wavelet transforms, which capture most of the second order dependencies, form the
basis of many successful image processing applications such as image compression (e.g.
JPEG2000) or image restoration (e.g. wavelet coring). However, the higher order dependencies can not be filtered out by these linear transforms. In particular, the absolute values
of neighboring wavelet coefficients (but not their signs) are mutually dependent. This kind
of dependency is caused by the presence of edges that induce clustering of wavelet activity.
Our philosophy is that by modelling this clustering effect we can potentially improve the
performance of some important image processing tasks.
Our model builds on earlier work in the image processing literature. In particular, the
PoEdges models that we discuss in this paper can be viewed as generalizations of the models proposed in [1] and [2]. The state-of-the-art in this area is the joint model discussed in [3] based on the "Gaussian scale mixture" model (GSM). While the GSM falls in the category of directed graphical models and has a top-down structure, the PoEdges model is best
classified as an (undirected) Markov random field model and follows bottom-up semantics.
The main contributions of this paper are 1) a new model to describe the higher order statistical dependencies among wavelet coefficients (section 2), 2) an efficient estimation procedure to fit the parameters of a single edge-pert model and a new technique to estimate
the wavelet coefficients that participate in each such (local) model (section 3.1) and 3) a
new "iterated Wiener denoising algorithm" (section 3.2). In section 4 we report on a number of experiments to compare performance of our algorithm with several methods in the literature and with the GSM-based method in particular.
literature and with the GSM-based method in particular.
[Figure 1 panels: (Ia) empirical and (Ib) model conditional histograms (fitted W = [8.64, 8.63], α = 0.28), plotted as center component vs. upper left component; (IIa) bottom-up network with nodes Z, |Z|^β, W, U, U^α; (IIb) top-down generative network.]
Figure 1: Estimated (Ia) and modelled (Ib) conditional distribution of a wavelet coefficient given its upper left neighbor. The statistics were collected from the vertical subband at the lowest level of a Haar filter wavelet decomposition of the "Lena" image. Note that the "bow-tie" dependencies are captured by the PoEdges model. (IIa) Bottom-up network interpretation of the "products of edge-perts" model. (IIb) Top-down generative Gaussian scale mixture model.
2 "Product of Edge-perts"
It has long been recognized in the image processing community that wavelet transforms
form an excellent basis for representation of images. Within the class of linear transforms, it represents a compromise between many conflicting but desirable properties
of image representation such as multi-scale and multi-orientation representation, locality both in space and frequency, and orthogonality resulting in decorrelation. A particularly suitable wavelet transform which forms the basis of the best denoising algorithms today is the over-complete steerable wavelet pyramid [4] freely downloadable from
http://www.cns.nyu.edu/~lcv/software.html. In our experiments we have confirmed that the best
results were obtained using this wavelet pyramid.
In the following we will describe a model for the statistical dependencies between wavelet
coefficients. This model was inspired by recent studies of these dependencies (see e.g.
[1, 5]). It also represents a generalization of the bivariate Laplacian model proposed in
[2]. The probability distribution of the "product of edge-pert" model (PoEdges) over the
wavelet coefficients z has the following form,
  P(z) = (1/Z) exp[ −Σ_i ( Σ_j W_ij |â_j^T z|^{β_j} )^{α_i} ],   β_j > 0,  α_i ∈ (0, 1],  W_ij ≥ 0

where the normalization constant Z depends on all the parameters in the model {W_ij, â_j, β_j, α_i} and where â indicates a unit-length vector.
In figure 2 we show the effect of changing some parameters for a single edge-pert model (i.e. set i = 1 in the PoEdges model above). The parameters {β_j} control the shape of the contours: for β = 2 we have elliptical contours, for β = 1 the contours are straight lines, while for β < 1 the contours curve inwards. The parameters {α_i} control the rate at which the distribution decays, i.e. the distance between iso-probability contours. The unit vectors {â_i} determine the orientation of the basis vectors. If the {â_i} are axis-aligned (as in figure 2), the distribution is symmetric w.r.t. reflections of any subset of the {z_i} in the origin, which implies that the wavelet coefficients are necessarily decorrelated (although higher order dependencies may still remain). Finally, the weights {W_ij} model the scale (inverse variance) of the wavelet coefficients. We mention that it is possible to entertain a larger number of basis vectors than wavelet coefficients (a so-called "over-complete basis"), which seems appropriate for some of the empirical joint histograms shown in [1].
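To make the model concrete, the following minimal sketch (assuming NumPy; all names illustrative) evaluates the unnormalized PoEdges penalty, i.e. the quantity inside the exponent. The normalizer Z is omitted since it is intractable in general.

    import numpy as np

    def poedges_penalty(z, A, W, alpha, beta):
        # A: (d, m) columns are the unit-length basis vectors a_j (hat)
        # W: (k, m) nonnegative weights; alpha: (k,) in (0,1]; beta: (m,) > 0
        proj = np.abs(A.T @ z) ** beta      # |a_j^T z|^{beta_j}
        u = W @ proj                        # u_i = sum_j W_ij |a_j^T z|^{beta_j}
        return np.sum(u ** alpha)           # density is exp(-penalty) / Z

    # axis-aligned toy setting in the spirit of Figure 2 (illustrative values)
    A = np.eye(2)
    W = np.array([[1.0, 0.8]])
    print(poedges_penalty(np.array([1.0, -2.0]), A, W,
                          alpha=np.array([0.5]), beta=np.array([0.5, 0.5])))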
This model describes two important statistical properties which have been observed for wavelet coefficients: 1) its marginal distributions p(z_i) are peaked and have heavy tails (high kurtosis) and 2) the conditional distributions p(z_i|z_j) display "bow-tie" dependencies which are indicative of clustering of wavelet coefficients (neighboring wavelet coefficients are often active together).
Figure 2: Contour plots for a single edge-pert model with (a) β_{1,2} = 0.5, α = 0.5, (b) β_{1,2} = 1, α = 0.5, (c) β_{1,2} = 2, α = 0.5, (d) β_{1,2} = 2, α = 0.3. For all figures W_1 = 1 and W_2 = 0.8.
This phenomenon is shown in figure 1Ia,b. To better understand the qualitative behavior of our model we provide the following network interpretation (see figure 1IIa,b). Input to the model (i.e. the wavelet coefficients) undergoes the nonlinear transformation z_i → |z_i|^{β_i} → u = W|z|^β → u^α. The output of this network, u^α, can be interpreted as a "penalty" for the input: the larger this penalty is, the more unlikely this input becomes under the probabilistic model. This process is most naturally understood [6] as enforcing constraints of the form u = W|z|^β ≈ 0, by penalizing violations of these constraints with u^α.

What is the reason that the PoEdges model captures the clustering of wavelet activities? Consider a local model describing the statistical structure of a patch of wavelet coefficients and recall that the weighted sum of these activities is penalized. At a fixed position the activities are typically very small across images. However, when an edge happens to fall within the window of the model, most coefficients become active jointly. This "sparse" pattern of activity incurs less penalty than, for instance, the same amount¹ of activity distributed equally over all coefficients, because of the concave shape of the penalty function, i.e. (act)^α < (½ act)^α + (½ act)^α where "act" is the activity level and α < 1 (e.g. for α = ½ and act = 1: 1 < 2·√½ ≈ 1.41).
2.1 Related Work
Early wavelet denoising techniques were based on the observation that the marginal distribution of a wavelet coefficient is highly kurtotic (peaked and heavy tails). It was found
that the generalized Gaussian density represents a very good fit to the empirical histograms
[1, 7],

  p(z) = ( βw / 2Γ(1/β) ) exp[ −(w|z|)^β ],   β > 0,  w > 0.   (1)
This has led to the successful wavelet coring and shrinkage methods. A bivariate generalization of that model, describing a wavelet coefficient z_c and its "parent" z_p at a higher level in the pyramid jointly, was proposed in [2]. The probability density,

  p(z_c, z_p) = ( w / 2π ) exp[ −√( w(z_c² + z_p²) ) ]   (2)
is easily seen to be a special case of the PoEdges model proposed here. This model, unlike the univariate model, captures the bow-tie dependencies described above, resulting in a significant gain in denoising performance.

"Gaussian scale mixtures" (GSM) have been proposed to model even larger neighborhoods of wavelet coefficients. In particular, very good denoising results have been obtained by including within-subband neighborhoods of size 3 × 3 in addition to the parent of a wavelet coefficient [3]. A GSM is defined in terms of a precision variable u, the square root of which multiplies a multivariate Gaussian variable: z = √u y, y ∼ N[0, Σ], resulting in the following expression for the distribution over the wavelet coefficients: p(z) = ∫ du N_z[0, uΣ] p(u). Here, p(u) is the prior distribution for the precision variable.
¹ We assume the total amount of variance in wavelet activity is fixed in this comparison.
This is in contrast to the PoEdges model, which is better interpreted as a bottom-up network with log-probability proportional to its output.

3 Edge-pert Denoising
Based on the PoEdges model discussed in the previous sections we now introduce a simplified model that forms the basis for a practical denoising algorithm. Recent progress in the
field has indicated that it is important to model the higher order dependencies which exist
between wavelet coefficients [2, 3]. This can be realized through the estimation of a joint
model on a small cluster of wavelet coefficients around each coefficient. Ideally, we would
like to use the full PoEdges model, but training these models from data is cumbersome.
Therefore, in order to keep computations tractable, we proceed with a simplified model,
  p(z) ∝ exp[ −( Σ_j w_j (â_j^T z)² )^α ].   (3)

Compared to the full PoEdges model we use only one edge-pert and we have set β_j = 2 ∀j.
3.1 Model Estimation

Our next task is to estimate the parameters of this model efficiently. We will learn separate models for each wavelet coefficient jointly with a small neighborhood of dependent coefficients. Each such model is estimated in three steps: I) determine the coefficients that participate in each model, II) transform each model into a decorrelated domain (this implicitly estimates the {â_j}) and III) estimate the remaining parameters w, α in the decorrelated domain using moment matching. Below we will describe these steps in more detail.
By z_i, z̃_i we will denote the clean and noisy wavelet coefficients respectively. With y_i, ỹ_i we denote the decorrelated clean and noisy wavelet coefficients, while n_i denotes the Gaussian noise random variable in the wavelet domain, i.e. z̃_i = z_i + n_i. Both due to the details of the wavelet decomposition and due to the properties of the noise itself, we assume the noise to be correlated and zero mean: E[n_i] = 0, E[n_i n_j] = Λ_ij. In this paper we further assume that we know the noise covariance in the image domain, from which one can easily compute the noise covariance in the wavelet domain; however, only minor changes are needed to estimate it from the noisy image itself.
Step I: We start with a 7 × 7 neighborhood from which we will adaptively select the best
candidates to include in the model. In addition, we will always include the parent coefficient in the subband of a coarser scale if it exists (this is done by first up-sampling this
band, see [3]). The coefficients that participate in a model are selected by estimating their
dependencies relative to the center coefficient. Anticipating that (second order) correlations will be removed by sphering we are only interested in higher order dependencies, in
particular dependencies between the variances. The following cumulant is used to obtain
these estimates,

  H_cj = E[z̃_c² z̃_j²] − 2E[z̃_c z̃_j]² − E[z̃_c²] E[z̃_j²]   (4)

where c is the center coefficient which will be denoised. The necessary averages E[·] are computed by collecting samples within each subband, assuming that the statistics are location invariant. It can be shown that this cumulant is invariant under addition of possibly correlated Gaussian noise, i.e. its value is the same for {z_i} and {z̃_i}. Effectively, we measure the (higher order) dependencies between squared wavelet coefficients after subtraction of all correlations. Finally, we select the participants of a model centered at coefficient z̃_c by ranking the positive H_cj and picking all candidates i which satisfy: H_ci > 0.7 · max_{j≠c} H_cj.
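A minimal sketch of this selection rule, assuming the subband samples are arranged as rows of a matrix with one column per candidate position (all names hypothetical):

    import numpy as np

    def select_participants(Z, c, thresh=0.7):
        # Z: (n_samples, n_cands) noisy coefficients sampled within the subband,
        # one column per candidate in the 7x7 window (plus the parent); c = center.
        n = Z.shape[0]
        E2 = np.mean(Z ** 2, axis=0)                    # E[z~_j^2]
        Exy = (Z.T @ Z) / n                             # E[z~_c z~_j]
        Ex2y2 = ((Z ** 2).T @ (Z ** 2)) / n             # E[z~_c^2 z~_j^2]
        H = Ex2y2[c] - 2.0 * Exy[c] ** 2 - E2[c] * E2   # Eq. 4, all j at once
        H[c] = -np.inf                                  # exclude the center itself
        return np.where((H > 0) & (H > thresh * np.max(H)))[0]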
Step II: For each model (with varying number of participants) we estimate the covariance,

  C_ij = E[z_i z_j] = E[z̃_i z̃_j] − Λ_ij   (5)

and correct it by setting to zero all negative eigenvalues in such a way that the sum of the eigenvalues is invariant (see [3]). Statistics are again collected by sampling within a subband. Then, we perform a linear transformation to a new basis in which Λ = I and C is diagonal. This can be accomplished by the following procedure,

  RR^T = Λ  ⇒  U D U^T = R^{−1} C R^{−T}  ⇒  ỹ = (RU)^{−1} z̃   (6)

(with D diagonal). In this new space (which is different for every wavelet coefficient) we can now assume â_j = e_j, the axis-aligned basis vector.
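A sketch of this simultaneous whitening and diagonalization step (a Cholesky factor is one valid choice for RR^T = Λ; names are illustrative):

    import numpy as np

    def decorrelate(z_noisy, C, Lam):
        # z_noisy: (n,) noisy cluster; C: signal covariance (Eq. 5, PSD-corrected);
        # Lam: wavelet-domain noise covariance.
        R = np.linalg.cholesky(Lam)                        # R R^T = Lam
        M = np.linalg.solve(R, np.linalg.solve(R, C).T).T  # R^{-1} C R^{-T}
        D, U = np.linalg.eigh(M)                           # U diag(D) U^T = M
        y_noisy = np.linalg.solve(R @ U, z_noisy)          # y~ = (R U)^{-1} z~
        return y_noisy, D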
Step III: In the decorrelated space we estimate the single edge-pert model by moment
matching. The moments of the edge-pert model in this space are easily computed using
  E[ ( Σ_{j=1}^{N_p} w_j y_j² )^ℓ ] = Γ( (N_p + 2ℓ) / 2α ) / Γ( N_p / 2α )   (7)

where N_p is the number of participating coefficients in the model. We note that E[ỹ_i²] = 1 + E[y_i²]. This leads to the following equation for α:

  N_p² Γ( (N_p+4)/2α ) Γ( N_p/2α ) / Γ( (N_p+2)/2α )²
    = Σ_{i=1}^{N_p} ( E[ỹ_i⁴] − 6E[ỹ_i²] + 3 ) / ( E[ỹ_i²] − 1 )²
      + Σ_{i≠j} ( E[ỹ_i² ỹ_j²] − E[ỹ_i²] − E[ỹ_j²] + 1 ) / ( (E[ỹ_i²] − 1)(E[ỹ_j²] − 1) ).   (8)

Thus we can estimate α by a line search and approximate the second term on the right hand side with N_p(N_p − 1) to simplify the calculations. By further noting that the model (Eqn. 3) is symmetric w.r.t. permutations of the variables u_j = w_j y_j², we find

  w_j = Γ( (N_p+2)/2α ) / ( N_p ( E[ỹ_j²] − 1 ) Γ( N_p/2α ) ).   (9)
A common strategy in the wavelet literature is to estimate the averages E[·] by collecting samples in a local neighborhood around the coefficient under consideration. The advantage is that the estimates adapt to the local statistics in the image. We have adopted this strategy and used an 11 × 11 box around each coefficient to collect 121 samples in the decorrelated wavelet domain. Coefficients for which E[ỹ_i²] < 1 are set to zero and removed from consideration. The estimation of α depends on the fourth moment and is thus very sensitive to outliers, which is a commonly known problem with the moment matching method. We encounter the same problem, so whenever we find no estimate of α in [0, 1] using Eqn. 8 we simply set it to 0.5.
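The estimation of α and w can be sketched as follows: a grid-based line search over Eq. 8 with the N_p(N_p − 1) approximation, using gammaln to keep the Γ-ratios stable (names illustrative, assuming SciPy):

    import numpy as np
    from scipy.special import gammaln

    def estimate_alpha_w(Y, default_alpha=0.5):
        # Y: (n_samples, Np) decorrelated noisy samples y~ from the 11x11 window;
        # assumes E[y~_j^2] > 1 for all kept coefficients (others were removed).
        Np = Y.shape[1]
        E2, E4 = np.mean(Y ** 2, axis=0), np.mean(Y ** 4, axis=0)
        v = E2 - 1.0                                        # estimates of E[y_j^2]
        rhs = np.sum((E4 - 6.0 * E2 + 3.0) / v ** 2) + Np * (Np - 1)
        def lhs(a):                                         # left side of Eq. 8
            g = lambda x: gammaln(x / (2.0 * a))
            return Np ** 2 * np.exp(g(Np + 4) + g(Np) - 2.0 * g(Np + 2))
        grid = np.linspace(0.05, 1.0, 200)
        vals = np.array([lhs(a) for a in grid])
        ok = np.isfinite(vals)                              # exp may overflow for tiny a
        alpha = grid[ok][np.argmin(np.abs(vals[ok] - rhs))] if ok.any() \
                else default_alpha                          # fallback, as in the text
        w = np.exp(gammaln((Np + 2) / (2 * alpha))
                   - gammaln(Np / (2 * alpha))) / (Np * v)  # Eq. 9
        return alpha, w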
3.2 The Iterated Wiener Filter

To infer a wavelet coefficient given its noisy observation in the decorrelated wavelet domain, we maximize the a posteriori probability of our joint model. This is equivalent to,

  ẑ = argmax_z [ log p(z̃|z) + log p(z) ].   (10)
When we assume Gaussian pixel noise, this translates into,

  ẑ = argmin_z  ½ (z − z̃)^T K (z − z̃) + ( Σ_j w_j z_j² )^α   (11)

where J is the (linear) wavelet transform, z̃ = Jx, K = J^{#T} Λ_n^{−1} J^{#} with J^{#} = (J^T J)^{−1} J^T the pseudo-inverse of J (i.e. J^{#} J = I) and Λ_n the noise covariance matrix. In the decorrelated wavelet domain we simply set K = I.

One can now construct an upper bound on this objective by using,

  f^α ≤ λ f + (1 − α) (λ/α)^{α/(α−1)},   α < 1.   (12)
[Figure 3 plots. Lena legend (output PSNR at the four input PSNR levels): GSM: 35.59, 33.89, 32.67, 31.68; EP: 35.60, 33.89, 32.62, 31.64; BiV: 35.35, 33.67, 32.40, 31.40; LiOr: 34.96, 33.05, 31.72, 30.64; LM: 34.31, 32.36, 31.01, 29.98. Barbara legend: GSM: 34.03, 31.87, 30.31, 29.12; EP: 34.40, 32.32, 30.86, 29.69; BiV: 33.35, 31.31, 29.80, 28.61; LiOr: 33.35, 31.10, 29.44, 28.23; LM: 32.57, 30.19, 28.59, 27.42. Axes: output PSNR [dB] vs. input PSNR [dB] (20-28).]
Figure 3: Output PSNR as a function of input PSNR for various methods on Lena (left) and Barbara (right) images. GSM: Gaussian scale mixture (3 × 3+p) [3], EP: edge-pert, BiV: bivariate adaptive shrinkage [2], LiOr: results from [8], LM: 5 × 5 LAWMAP results from [9]. Dashed lines indicate results copied from the literature, while solid lines indicate that the values were (re)produced on our computer.
This bound is saturated for λ = α f^{α−1}, and hence we can construct the following iterative algorithm that is guaranteed to converge to a local minimum,

  z^{t+1} = ( K + Diag[2λ^t w] )^{−1} K z̃,   λ^{t+1} = α ( Σ_j w_j (z_j^{t+1})² )^{α−1}.   (13)

This algorithm has a natural interpretation as an "iterated Wiener filter" (IWF), since the first step (left hand side) is an ordinary Wiener filter while the second step (right hand side) adapts the variance of the filter. A summary of the complete algorithm is provided below.
Edge-pert Denoising Algorithm
1. Decompose image into subbands.
2. For each subband (except low-pass residual):
   2i. Determine coefficients participating in joint model by using Eqn. 4 (includes parent).
   2ii. Compute noise covariance Λ.
   2iii. Compute signal covariance using Eqn. 5.
3. For each coefficient in a subband:
   3i. Transform coefficients into the decorrelated domain using Eqn. 6.
   3ii. Estimate parameters {α, w_i} on a local neighborhood using Eqn. 8 and Eqn. 9.
   3iii. Denoise all wavelet coefficients in the neighborhood using IWF from section 3.2.
   3iv. Transform denoised cluster back to the wavelet domain and retain the "center coefficient" only.
4. Reconstruct denoised image by inverting the wavelet transform.
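For step 3iii, a minimal sketch of the iterated Wiener filter in the decorrelated domain (where K = I, so the Wiener step is elementwise; a small floor guards the λ update at z = 0; names illustrative):

    import numpy as np

    def iterated_wiener(z_noisy, w, alpha, n_iter=20):
        # Eq. 13 with K = I: z <- (I + Diag[2 lam w])^{-1} z~, then update lam.
        z = z_noisy.copy()
        for _ in range(n_iter):
            lam = alpha * max(np.dot(w, z ** 2), 1e-12) ** (alpha - 1.0)
            z = z_noisy / (1.0 + 2.0 * lam * w)
        return z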
4 Experiments

Denoising experiments were run on the steerable wavelet pyramid with oriented high-pass residual bands (FSpyr) using 8 orientations as described in [3]. Results are reported on six images: "Lena", "Barbara", "Boat", "Fingerprint", "House" and "Peppers", and averaged over 5 experiments. In each experiment an image was artificially contaminated with independent Gaussian pixel noise of some predetermined variance and denoised using 20 iterations of the proposed algorithm. To reduce artifacts at the boundaries we used "reflective boundary extensions". The images were obtained from http://decsai.ugr.es/~javier/denoise/index.html to ensure comparison on the same set of images.
In table 1 we compare performance between the PoEdges and GSM based denoising algorithms on six test images and ten different noise levels. In figure 3 we compare results on
σ                  1      2      5      10     15     20     25     50     75     100
Lena         EP    48.65  43.53  38.51  35.60  33.89  32.62  31.64  28.58  26.74  25.53
             GSM   48.46  43.23  38.49  35.61  33.90  32.66  31.69  28.61  26.84  25.64
Barbara      EP    48.70  43.59  38.06  34.40  32.32  30.86  29.69  26.12  24.12  22.90
             GSM   48.37  43.29  37.79  34.03  31.86  30.32  29.13  25.48  23.65  22.61
Boat         EP    48.46  43.09  37.05  33.49  31.58  30.28  29.24  26.27  24.64  23.56
             GSM   48.44  42.99  36.97  33.58  31.70  30.38  29.37  26.38  24.79  23.75
Fingerprint  EP    48.44  43.02  36.66  32.35  30.02  28.42  27.31  24.15  22.45  21.28
             GSM   48.46  43.05  36.68  32.45  30.14  28.60  27.45  24.16  22.40  21.22
House        EP    49.06  44.32  39.00  35.54  33.67  32.37  31.33  28.15  26.12  24.84
             GSM   48.85  44.07  38.65  35.35  33.64  32.39  31.40  28.26  26.41  25.11
Peppers      EP    48.50  43.20  37.40  33.79  31.74  30.29  29.13  25.69  23.85  22.50
             GSM   48.38  43.00  37.31  33.77  31.74  30.31  29.21  25.90  24.00  22.66
Table 1: Comparison of image denoising results between PoEdges (EP above) and its closest competitor (GSM). All results are averaged over 5 noise samples. The GSM results are copied from [3]. Details of the PoEdges algorithm are described in the main text. Note that PoEdges outperforms GSM for low noise levels while the GSM performs better at high noise levels. Also, PoEdges performs best at all noise levels on the Barbara image, while GSM is superior on the boat image.
FSpyr against various methods published in the literature [3, 2, 9] on the images "Lena" and "Barbara".
These experiments lead to some interesting conclusions. In comparing PoEdges with GSM, the general trend seems to be that PoEdges performs superior at lower noise levels while the reverse is true for higher noise levels. We observe that PoEdges gives significantly better results on the "Barbara" image than any other published method (by a large margin). According to the findings of the authors of [3]², this stems mainly from the fact that the parameters are estimated locally, which is particularly suited for this image. Increasing the estimation window in step 3ii of the algorithm lets the denoising results drop down to the GSM solution (not reported here). Comparing the quality of restored images in detail (as in figure 4) we conclude that the GSM produces slightly sharper edges at the expense of more artifacts. Denoising a 512 × 512 pixel sized image on a Pentium 4 2.8 GHz PC with our adaptive neighborhood selection model took 26 seconds for the QMF9 and 440 seconds for the FSpyr.

We also compared GSM and EP using a separable orthonormal pyramid (QMF9). Using this simpler orthonormal decomposition we found that the EP model outperforms GSM in all experiments described above. However, the results are significantly inferior because the wavelet representation plays a prominent role for denoising performance. These results and our Matlab implementation of the algorithm are available online³.
5 Discussion

We have proposed a general "product of edge-perts" model to capture the dependency structure in wavelet coefficients. This was turned into a practical denoising algorithm by simplifying to a single edge-pert and choosing β_j = 2 ∀j. The parameters of this model can be adapted based on the noisy observation of the image. In comparison with the closest competitor (GSM [3]) we found superior performance at low noise levels while the reverse is true for high noise levels. Also, the PoEdges model performs better than any competitor on the Barbara image, but consistently less well than GSM on the boat image.

The GSM model aims at capturing the same statistical regularities as the PoEdges but using a very different modelling paradigm: where PoEdges is best interpreted as a bottom-up constraint satisfaction model, the GSM is a causal generative model with top-down semantics. We have found that these two modelling paradigms exhibit different denoising accuracies on some types of images, implying an opportunity for further study and improvement.
² Personal communication.
³ http://www.kyb.mpg.de/~pgehler
[Figure 4: four image panels (a)-(d).]
Figure 4: Comparison between (c) GSM with 3 × 3+parent [3] (PSNR 29.13) and (d) edge-pert denoiser with parameter settings as described in the text (PSNR 29.69) on the Barbara image (cropped to 150 × 150 to enhance artifacts). Noisy image (b) has PSNR 20.17. Although the results turn out very similar, the GSM seems to be slightly less blurry at the expense of introducing more artifacts.
The model in Eqn. 3 can be extended in a number of ways. For example, we can lift the restriction on β_j = 2, allow more basis vectors â_j than coefficients, or extend the neighborhood selection to subbands of different scales and/or orientations. More substantial performance gains are expected if we can extend the single edge-pert case to a multi edge-pert model. However, approximations in the estimation of these models will become necessary to keep the denoising algorithm practical. The adaptation of α relies on empirical estimations of the fourth moment and is therefore very sensitive to outliers. We are currently investigating more robust estimators to fit α.
Further performance gains may still be expected through the development of new wavelet
pyramids and through modelling of new dependency structures such as the phenomenon of
phase alignment at the edges.
Acknowledgments We would like to thank the authors of [2] and [3] for making their
code available online.
References
[1] J. Huang and D. Mumford. Statistics of natural images and models. In Proc. of the Conf. on Computer Vision and Pattern Recognition, pages 1541-1547, Ft. Collins, CO, USA, 1999.
[2] L. Sendur and I.W. Selesnick. Bivariate shrinkage with local variance estimation. IEEE Signal Processing Letters, 9(12):438-441, 2002.
[3] J. Portilla, V. Strela, M. Wainwright, and E. P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Image Processing, 12(11):1338-1351, 2003.
[4] E.P. Simoncelli and W.T. Freeman. A flexible architecture for multi-scale derivative computation. In IEEE Second Int'l Conf. on Image Processing, Washington DC, 1995.
[5] E.P. Simoncelli. Modeling the joint statistics of images in the wavelet domain. In Proc. SPIE, 44th Annual Meeting, volume 3813, pages 188-195, Denver, 1999.
[6] G.E. Hinton and Y.W. Teh. Discovering multiple constraints that are frequently approximately satisfied. In Proc. of the Conf. on Uncertainty in Artificial Intelligence, pages 227-234, 2001.
[7] E.P. Simoncelli and E.H. Adelson. Noise removal via Bayesian wavelet coring. In 3rd IEEE Int'l Conf. on Image Processing, Lausanne, Switzerland, 1996.
[8] X. Li and M.T. Orchard. Spatially adaptive image denoising under over-complete expansion. In IEEE Int'l Conf. on Image Processing, Vancouver, BC, 2000.
[9] M. Kivanc, I. Kozintsev, K. Ramchandran, and P. Moulin. Low-complexity image denoising based on statistical modeling of wavelet coefficients. IEEE Signal Proc. Letters, 6:300-303, 1999.
1,978 | 2,798 | Worst-Case Bounds for Gaussian Process Models
Sham M. Kakade
University of Pennsylvania
Matthias W. Seeger
UC Berkeley
Dean P. Foster
University of Pennsylvania
Abstract
We present a competitive analysis of some non-parametric Bayesian algorithms in a worst-case online learning setting, where no probabilistic
assumptions about the generation of the data are made. We consider
models which use a Gaussian process prior (over the space of all functions) and provide bounds on the regret (under the log loss) for commonly used non-parametric Bayesian algorithms, including Gaussian regression and logistic regression, which show how these algorithms
can perform favorably under rather general conditions. These bounds explicitly handle the infinite dimensionality of these non-parametric classes
in a natural way. We also make formal connections to the minimax and
minimum description length (MDL) framework. Here, we show precisely
how Bayesian Gaussian regression is a minimax strategy.
1 Introduction
We study an online (sequential) prediction setting in which, at each timestep, the learner is
given some input from the set X , and the learner must predict the output variable from the
set Y. The sequence {(xt , yt )| t = 1, . . . , T } is chosen by Nature (or by an adversary), and
importantly, we do not make any statistical assumptions about its source: our statements
hold for all sequences. Our goal is to sequentially code the next label yt , given that we
have observed x_{≤t} and y_{<t} (where x_{≤t} and y_{<t} denote the sequences {x_1, . . . , x_t} and {y_1, . . . , y_{t−1}}). At each time t, we have a conditional distribution P(·|x_{≤t}, y_{<t}) over Y, which is our prediction strategy that is used to predict the next variable y_t. We then incur the instantaneous loss −log P(y_t|x_{≤t}, y_{<t}) (referred to as log loss), and the cumulative
loss is the sum of these instantaneous losses over t = 1, . . . , T .
Let Θ be a parameter space indexing elementary prediction rules in some model class, where P(y|x, θ) for θ ∈ Θ is a conditional distribution over Y called the likelihood. An expert is a single atom θ ∈ Θ, or, more precisely, the algorithm which outputs the predictive distribution P(·|x_t, θ) for every t. We are interested in bounds on the regret: the difference between the cumulative loss of a given adaptive prediction strategy and the cumulative loss of the best possible expert chosen in hindsight from a subset of Θ.
Kakade and Ng [2004] considered a parametric setting where Θ = R^d, X = R^d, and the prediction rules were generalized linear models, in which P(y|x, θ) = P(y|θ·x). They derived regret bounds for the Bayesian strategy (assuming a Gaussian prior over θ), which showed that many simple Bayesian algorithms (such as Gaussian linear regression and logistic regression) perform favorably when compared, in retrospect, to the best θ ∈ Θ. Importantly, these regret bounds have a time and dimensionality dependence of the form (d/2) log T, a dependence common in most MDL procedures (see Grunwald [2005]). For
Gaussian linear regression, the bounds of Kakade and Ng [2004] are comparable to the best
bounds in the literature, such as those of Foster [1991], Vovk [2001], Azoury and Warmuth
[2001] (though these latter bounds are stated in terms of the closely related square loss).
In this paper, we provide worst-case regret bounds on Bayesian non-parametric methods,
which show how these algorithms can have low regret. In particular, we examine the case
where the prior (over functions) is a Gaussian process, thereby extending the work of
Kakade and Ng [2004] to infinite-dimensional spaces of experts. There are a number of
important differences between this and the parametric setting. First, it turns out that the
natural competitor class is the reproducing kernel Hilbert space (RKHS) H. Furthermore,
the notion of dimensionality is more subtle, since the space H may be infinite dimensional.
In general, there is no a priori reason that any strategy (including the Bayesian one) should be able to compete favorably with the complex class H. However, for some input sequences x_{≤T} and kernels, we show that it is possible to compete favorably. Furthermore,
the relation of our results to Kakade and Ng [2004] is made explicit in Section 3.2.
Our second contribution is in making formal connections to minimax theory, where we
show precisely how Bayesian Gaussian regression is a minimax algorithm. In a general
setting, Shtarkov [1987] showed that a certain normalized maximum likelihood (NML) distribution minimizes the regret in the worst case. Unfortunately, for some "complex" model
classes, there may exist no strategy which achieves finite regret, and so the NML distribution may not exist.1 Gaussian density estimation (formally described in Example 4.2) is
one such case where this NML distribution does not exist. If one makes further restrictions
(on Y), then minimax results can be derived, such as in Takimoto and Warmuth [2000],
Barron et al. [1998], Foster and Stine [2001].
Instead of making further restrictions, we propose minimizing a form of a penalized regret,
where one penalizes more "complex" experts as measured by their cost under a prior q(θ). This penalized regret essentially compares our cumulative loss to the loss of a two part code (common in MDL, see Grunwald [2005]), where one first codes the model θ under a prior q and then codes the data using this θ. Here, we show that a certain normalized maximum a posteriori distribution is the corresponding minimax strategy, in general. Our main result here is in showing that for Gaussian regression, the Bayesian strategy is precisely this minimax strategy. The differences between this result and that of Takimoto and Warmuth [2000] are notable. In the latter, they assume Y ⊂ R is bounded and derive (near) minimax
algorithms which hold the variance of their predictions constant at each timestep (so they
effectively deal with the square loss). Under Bayes rule, the variance of the predictions
adapts, which allows the minimax property to hold with Y = R being unbounded.
Other minimax results have been considered in the non-parametric setting. The work of
Opper and Haussler [1998] and Cesa-Bianchi and Lugosi [2001] provide minimax bounds
in some non-parametric cases (in terms of a covering number of the comparator class),
though they do not consider input sequences.
The rest of the paper is organized as follows: Section 2 summarizes our model, Section 3
presents and discusses our bounds, and Section 4 draws out the connections to the minimax
and MDL framework. All proofs are available in a forthcoming longer version of this paper.
2 Bayesian Methods with Gaussian Process Priors

With a Bayesian prior distribution P_bayes(θ) over Θ, the Bayesian predicts y_t using the rule

  P_bayes(y_t | x_{≤t}, y_{<t}) = ∫ P(y_t | x_t, θ) P_bayes(θ | x_{<t}, y_{<t}) dθ

where the posterior is given by

  P_bayes(θ | x_{<t}, y_{<t}) ∝ P(y_{<t} | x_{<t}, θ) P_bayes(θ).
¹ For these cases, the normalization constant of the NML distribution is not finite.
Assuming the Bayesian learner models the data to be independent given θ, then

  P(y_{<t} | x_{<t}, θ) = ∏_{t′=1}^{t−1} P(y_{t′} | x_{t′}, θ).
It is important to stress that these are "working assumptions" in the sense that they lead to a prediction strategy (the Bayesian one), but the analysis does not make any probabilistic assumptions about the generation of the data. The cumulative loss of the Bayesian strategy is then

  Σ_{t=1}^{T} −log P_bayes(y_t | x_{≤t}, y_{<t}) = −log P_bayes(y_{≤T} | x_{≤T}),

which follows from the chain rule of conditional probabilities.
In this paper, we are interested in non-parametric prediction, which can be viewed as working with an infinite-dimensional function space Θ; assume Θ consists of real-valued functions u(x). The likelihood P(y|x, u(·)) is thus a distribution over y given x and the function u(·). Similar to Kakade and Ng [2004] (where they considered generalized linear models), we make the natural restriction that P(y|x, u(·)) = P(y|u(x)). We can think of u as a latent function and of P(y|u(x)) as a noise distribution. Two particularly important cases are that of Gaussian regression and logistic regression. In Gaussian regression, we have that Y = R and that P(y|u(x)) = N(y|u(x), σ²) (so y is distributed as a Gaussian with mean u(x) and fixed variance σ²). In logistic regression, Y = {−1, 1} and P(y|u(x)) = (1 + e^{−y u(x)})^{−1}.
In this paper, we consider the case in which the prior dP_bayes(u(·)) is a zero-mean Gaussian process (GP) with covariance function K, i.e. a real-valued random process which has the property that for every finite set x_1, . . . , x_n the random vector (u(x_1), . . . , u(x_n))^T is multivariate Gaussian, distributed as N(0, K), where K ∈ R^{n,n} is the covariance (or kernel) matrix with K_{i,j} = K(x_i, x_j). Note that K has to be a positive semidefinite function in that for all finite sets x_1, . . . , x_n the corresponding kernel matrices K are positive semidefinite.
Finally, we specify the subset of experts we would like the Bayesian prediction strategy to compete against. Every positive semidefinite kernel K is associated with a unique reproducing kernel Hilbert space (RKHS) H, defined as follows: consider the linear space of all finite kernel expansions (over any x_1, . . . , x_n) of the form f(x) = Σ_{i=1}^{n} α_i K(x, x_i) with the inner product

  ( Σ_i α_i K(·, x_i), Σ_j β_j K(·, y_j) )_K = Σ_{i,j} α_i β_j K(x_i, y_j)

and define the RKHS H as the completion of this space. By construction, H contains all finite kernel expansions f(x) = Σ_{i=1}^{n} α_i K(x, x_i) with

  ‖f‖²_K = α^T K α,   K_{i,j} = K(x_i, x_j).   (1)

The characteristic property of H is that all (Dirac) evaluation functionals are represented in H itself by the functions K(·, x_i), meaning (f, K(·, x_i))_K = f(x_i). The RKHS H turns out to be the largest subspace of experts for which our results are meaningful.
3 Worst-Case Bounds
In this section, we present our worst-case bounds, give an interpretation, and relate the
results to the parametric case of Kakade and Ng [2004]. The proofs are available in a
forthcoming longer version.
Theorem 3.1: Let (x_{≤T}, y_{≤T}) be a sequence from (X × Y)^T. For all functions f in the RKHS H associated with the prior covariance function K, we have

  −log P_bayes(y_{≤T}|x_{≤T}) ≤ −log P(y_{≤T}|x_{≤T}, f(·)) + ½ ‖f‖²_K + ½ log|I + cK|,

where ‖f‖_K is the RKHS norm of f, K = (K(x_t, x_{t′})) ∈ R^{T,T} is the kernel matrix over the input sequence x_{≤T}, and c > 0 is a constant such that for all y_t ∈ y_{≤T},

  −(d²/du²) log P(y_t|u) ≤ c

for all u ∈ R.

The proof of this theorem parallels that provided by Kakade and Ng [2004], with a number of added complexities for handling GP priors. For the special case of Gaussian regression where c = σ^{−2}, the following theorem shows the stronger result that the bound is satisfied with an equality for all sequences.
Theorem 3.2: Assume P(y_t|u(x_t)) = N(y_t|u(x_t), σ²) and that Y = R. Let (x_{≤T}, y_{≤T}) be a sequence from (X × Y)^T. Then,

  −log P_bayes(y_{≤T}|x_{≤T}) = min_{f∈H} [ −log P(y_{≤T}|x_{≤T}, f(·)) + ½ ‖f‖²_K ] + ½ log|I + σ^{−2}K|   (2)

and the minimum is attained for a kernel expansion over x_{≤T}.
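The equality is easy to verify numerically. The sketch below (illustrative, with an RBF kernel) computes the left-hand side as the Gaussian marginal likelihood and the right-hand side via the minimizing kernel expansion f = Kα with α = (K + σ²I)^{−1}y; the two agree to machine precision:

    import numpy as np

    def lhs_bayes(K, y, s2):
        S = K + s2 * np.eye(len(y))
        return 0.5 * (np.linalg.slogdet(2 * np.pi * S)[1]
                      + y @ np.linalg.solve(S, y))

    def rhs_penalized(K, y, s2):
        T = len(y)
        a = np.linalg.solve(K + s2 * np.eye(T), y)      # optimal expansion coeffs
        f = K @ a
        nll = 0.5 * (T * np.log(2 * np.pi * s2) + np.sum((y - f) ** 2) / s2)
        return nll + 0.5 * a @ K @ a \
               + 0.5 * np.linalg.slogdet(np.eye(T) + K / s2)[1]

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(6, 2)), rng.normal(size=6)
    K = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))   # RBF kernel matrix
    print(lhs_bayes(K, y, 0.25), rhs_penalized(K, y, 0.25))    # equal values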
This equality has important implications in our minimax theory (in Corollary 4.4, we make
this precise). It is not hard to see that the equality does not hold for other likelihoods.
3.1 Interpretation

The regret bound depends on two terms, ‖f‖²_K and log|I + cK|. We discuss each in turn.

The dependence on ‖f‖²_K states the intuitive fact that a meaningful bound can only be obtained under smoothness assumptions on the set of experts. The more complicated f is (as measured by ‖·‖_K), the higher the regret may be. The equality in Theorem 3.2 shows this dependence is unavoidable. We come back to this dependence in Section 4.

Let us now interpret the log|I + cK| term, which we refer to as the regret term. The constant c, which bounds the curvature of the likelihood, exists for most commonly used exponential family likelihoods. For logistic regression, we have c = 1/4, and for Gaussian regression, we have c = σ^{−2}. Also, interestingly, while f is an arbitrary function in H, this regret term depends on K only at the sequence points x_{≤T}.

For most infinite-dimensional kernels and without strong restrictions on the inputs, the regret term can be as large as Ω(T): the sequence can be chosen s.t. K ≈ c_0 I, which implies that log|I + cK| ≈ T log(1 + cc_0). For example, for an isotropic kernel (which is a function of the norm ‖x − x′‖_2) we can choose the x_t to be mutually far from each other. For kernels which barely enforce smoothness, e.g. the Ornstein-Uhlenbeck kernel exp(−b‖x − x′‖_1), the regret term can easily be Ω(T). The cases we are interested in are those where the regret term is o(T), in which case the average regret tends to 0 with time.
A spectral interpretation of this term helps us understand the behavior. If we let λ_1, λ_2, . . . , λ_T be the eigenvalues of K, then

  log|I + cK| = Σ_{t=1}^{T} log(1 + cλ_t) ≤ c tr K

where tr K is the trace of K. This last quantity is closely related to the "degrees of freedom" in a system (see Hastie et al. [2001]). Clearly, if the sum of the eigenvalues has a sublinear growth rate of o(T), then the average regret tends to 0. Also, if one assumes that the input sequence, x_{≤T}, is i.i.d., then the above eigenvalues are essentially the process eigenvalues. In a forthcoming longer version, we explore this spectral interpretation in more detail and provide a case using the exponential kernel in which the regret grows as O(poly(log T)).
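The following sketch illustrates the two regimes: for an RBF kernel, mutually distant inputs make K close to the identity and the regret term grows linearly in T, while clustered inputs give rapidly decaying eigenvalues and much slower growth (all values illustrative):

    import numpy as np

    def regret_term(X, c=0.25):
        K = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))   # RBF kernel
        lam = np.clip(np.linalg.eigvalsh(K), 0.0, None)
        return np.sum(np.log1p(c * lam))                           # log|I + cK|

    rng = np.random.default_rng(0)
    for T in (50, 100, 200, 400):
        print(T,
              regret_term(rng.normal(size=(T, 1))),                       # clustered
              regret_term(10.0 * np.arange(T, dtype=float)[:, None]))     # spread out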
3.2 The Parametric Case
Here we obtain a slight generalization of the result in Kakade and Ng [2004] as a special case. Namely, the familiar linear model, with u(x) = θ·x, θ, x ∈ R^d and Gaussian prior θ ∼ N(0, I), can be seen as a GP model with the linear kernel: K(x, x′) = x·x′. With X = (x_1, . . . , x_T)^T we have that a kernel expansion f(x) = Σ_i α_i x_i·x = θ·x with θ = X^T α, and ‖f‖²_K = α^T XX^T α = ‖θ‖²_2, so that H = {θ·x | θ ∈ R^d}, and so

  log|I + cK| = log|I + cX^T X|.

Therefore, our result gives an input-dependent version of the result of Kakade and Ng [2004]. If we make the further assumption that ‖x‖_2 ≤ 1 (as done in Kakade and Ng [2004]), then we can obtain exactly their regret term:

  log|I + cK| ≤ d log(1 + cT/d)

which can be seen by rotating K into a diagonal matrix and maximizing the expression subject to the constraint that ‖x‖_2 ≤ 1 (i.e. that the eigenvalues must sum to 1).

In general, this example shows that if K is a finite-dimensional kernel such as the linear or the polynomial kernel, then the regret term is only O(log T).
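The reduction to a d × d determinant is Sylvester's identity, log|I_T + cXX^T| = log|I_d + cX^T X|, which the following sketch confirms numerically (illustrative values):

    import numpy as np

    rng = np.random.default_rng(1)
    T, d, c = 100, 3, 0.25
    X = rng.normal(size=(T, d))
    K = X @ X.T                                      # linear-kernel matrix, T x T
    print(np.linalg.slogdet(np.eye(T) + c * K)[1],
          np.linalg.slogdet(np.eye(d) + c * X.T @ X)[1])   # same value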
4 Relationships to Minimax Procedures and MDL
This section builds the framework for understanding the minimax property of Gaussian regression. We start by reviewing Shtarkov's theorem, which shows that a certain normalized maximum likelihood density is the minimax strategy (when using the log loss). In many cases, this minimax strategy does not exist, namely in those cases where the minimax regret is infinite. We then propose a different, penalized notion of regret, and show that a certain normalized maximum a posteriori density is the minimax strategy here. Our main result (Corollary 4.4) shows that for Gaussian regression, the Bayesian strategy is precisely this minimax strategy.
4.1 Normalized Maximum Likelihood

Here, let us assume that there are no inputs: sequences consist of only y_t ∈ Y. Given a measurable space with base measure μ, we employ a countable number of random variables y_t in Y. Fix the sequence length T and define the model class F = {Q(·|θ) | θ ∈ Θ}, where Q(·|θ) denotes a joint probability density over Y^T with respect to μ.

We assume that for our model class there exists a parameter, θ_ml(y_{≤T}), maximizing the likelihood Q(y_{≤T}|θ) over Θ for all y_{≤T} ∈ Y^T. We make this assumption to make the connections to maximum likelihood (and, later, MAP) estimation clear. Define the regret of a joint density P on y_{≤T} as:

  R(y_{≤T}, P, Θ) = −log P(y_{≤T}) − inf_{θ∈Θ} { −log Q(y_{≤T}|θ) }   (3)
                  = −log P(y_{≤T}) + log Q(y_{≤T}|θ_ml(y_{≤T})),   (4)

where the latter step uses our assumption on the existence of θ_ml(y_{≤T}).
Define the minimax regret with respect to Θ as:

  R(Θ) = inf_P sup_{y_{≤T} ∈ Y^T} R(y_{≤T}, P, Θ)

where the inf is over all probability densities on Y^T.
The following theorem due to Shtarkov [1987] characterizes the minimax strategy.

Theorem 4.1: [Shtarkov, 1987] If the following density exists (i.e. if it has a finite normalization constant), then define it to be the normalized maximum likelihood (NML) density:

  P_ml(y_{≤T}) = Q(y_{≤T}|θ_ml(y_{≤T})) / ∫ Q(y_{≤T}|θ_ml(y_{≤T})) dμ(y_{≤T})   (5)

If P_ml exists, it is a minimax strategy, i.e. for all y_{≤T}, the regret R(y_{≤T}, P_ml, Θ) does not exceed R(Θ).
Note that this density exists only if the normalizing constant is finite, which is not the case in general. The proof is straightforward using the fact that the NML density is an equalizer, meaning that it has constant regret on all sequences.

Proof: First note that the regret R(y_{≤T}, P_ml, Θ) is the constant log ∫ Q(y_{≤T}|θ_ml(y_{≤T})) dμ(y_{≤T}). To see this, simply substitute Eq. 5 into Eq. 4 and simplify.

For convenience, define the regret of any P as R(P, Θ) = sup_{y_{≤T}∈Y^T} R(y_{≤T}, P, Θ). For any P ≠ P_ml (differing on a set with positive measure), there exists some y_{≤T} such that P(y_{≤T}) < P_ml(y_{≤T}), since the densities are normalized. This implies that

  R(P, Θ) ≥ R(y_{≤T}, P, Θ) > R(y_{≤T}, P_ml, Θ) = R(P_ml, Θ)

where the first step follows from the definition of R(P, Θ), the second from −log P(y_{≤T}) > −log P_ml(y_{≤T}), and the last from the fact that P_ml is an equalizer (its regret is constant on all sequences). Hence, P has a strictly larger regret, implying that P_ml is the unique minimax strategy.
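When Y is finite the Shtarkov normalizer is a finite sum and P_ml does exist; as a toy illustration (not from the paper), the sketch below computes the equalized regret log Z for i.i.d. Bernoulli models:

    import numpy as np
    from itertools import product

    def bernoulli_nml_regret(T):
        # Z = sum over all y in {0,1}^T of max_theta Q(y|theta); regret = log Z
        def max_lik(s):
            k = sum(s)
            if k in (0, T):
                return 1.0                      # theta_ml in {0, 1}
            th = k / T
            return th ** k * (1 - th) ** (T - k)
        return np.log(sum(max_lik(s) for s in product((0, 1), repeat=T)))

    print([round(bernoulli_nml_regret(T), 3) for T in (1, 2, 4, 8)])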
Unfortunately, in many important model classes, the minimax regret R(Θ) is not finite, and the NML density does not exist. We now provide one example (see Grunwald [2005] for further discussion).

Example 4.2: Consider a model which assumes the sequence is generated i.i.d. from a Gaussian with unknown mean and unit variance. Specifically, let Θ = R, Y = R, and P(y_{≤T}|θ) be the product ∏_{t=1}^{T} N(y_t; θ, 1). It is easy to see that for this class the minimax regret is infinite and P_ml does not exist (see Grunwald [2005]); indeed, already for T = 1 the maximized likelihood N(y; y, 1) = (2π)^{−1/2} is constant in y, so the Shtarkov integral over Y = R diverges. This example can be generalized to the Gaussian regression model (if we know the sequence x_{≤T} in advance). For this problem, if one modifies the space of allowable sequences (i.e. Y^T is modified), then one can obtain finite regret, such as those in Barron et al. [1998], Foster and Stine [2001]. This technique may not be appropriate in general.
4.2 Normalized Maximum a Posteriori

To remedy this problem, consider placing some structure on the model class F = {Q(·|θ) | θ ∈ Θ}. The idea is to penalize Q(·|θ) ∈ F based on this structure. The motivation is similar to that of structural risk minimization [Vapnik, 1998]. Assume that Θ is a measurable space and place a prior distribution with density function q on Θ. Define the penalized regret of P on y_{≤T} as:

  R_q(y_{≤T}, P, Θ) = −log P(y_{≤T}) − inf_{θ∈Θ} { −log Q(y_{≤T}|θ) − log q(θ) }.

Note that −log Q(y_{≤T}|θ) − log q(θ) can be viewed as a "two part" code, in which we first code θ under the prior q and then code y_{≤T} under the likelihood Q(·|θ). Unlike the standard regret, the penalized regret can be viewed as a comparison to an actual code. These two part codes are common in the MDL literature (see Grunwald [2005]). However, in MDL, they consider using minimax schemes (via P_ml) for the likelihood part of the code, while we consider minimax schemes for this penalized regret.

Again, for clarity, assume there exists a parameter, θ_map(y_{≤T}), maximizing log Q(y_{≤T}|θ) + log q(θ). Notice that this is just the maximum a posteriori (MAP) parameter, if one were to use a Bayesian strategy with the prior q (since the posterior density would be proportional to Q(y_{≤T}|θ)q(θ)). Here,

  R_q(y_{≤T}, P, Θ) = −log P(y_{≤T}) + log Q(y_{≤T}|θ_map(y_{≤T})) + log q(θ_map(y_{≤T}))

Similarly, with respect to Θ, define the minimax penalized regret as:

  R_q(Θ) = inf_P sup_{y_{≤T} ∈ Y^T} R_q(y_{≤T}, P, Θ)

where again the inf is over all densities on Y^T. If Θ is finite or countable and Q(·|θ) > 0 for all θ, then the Bayes procedure has the desirable property of having penalized regret which is non-positive.² However, in general, the Bayes procedure does not achieve the minimax penalized regret, R_q(Θ), which is what we desire, though, for one case, we show that it does (in the next section).
We now characterize this minimax strategy in general.

Theorem 4.3: Define the normalized maximum a posteriori (NMAP) density, if it exists, as:

  P_map(y_{≤T}) = Q(y_{≤T}|θ_map(y_{≤T})) q(θ_map(y_{≤T})) / ∫ Q(y_{≤T}|θ_map(y_{≤T})) q(θ_map(y_{≤T})) dμ(y_{≤T}).   (6)

If P_map exists, it is a minimax strategy for the penalized regret, i.e. for all y_{≤T}, the penalized regret R_q(y_{≤T}, P_map, Θ) does not exceed R_q(Θ).

The proof relies on P_map being an equalizer for the penalized regret and is identical to that of Theorem 4.1: just replace all quantities with their penalized equivalents.
4.3 Bayesian Gaussian Regression as a Minimax Procedure

We now return to the setting with inputs and show how the Bayesian strategy for the Gaussian regression model is a minimax strategy for all input sequences x_{≤T}. If we fix the input sequence x_{≤T}, we can consider the competitor class to be F = {P(y_{≤T}|x_{≤T}, θ) | θ ∈ Θ}. In other words, we make the more stringent comparison against a model class which has full knowledge of the input sequence in advance. Importantly, note that the learner only observes the past inputs x_{<t} at time t.

Consider the Gaussian regression model, with likelihood P(y_{≤T}|x_{≤T}, u(·)) = N(y_{≤T}|u(x_{≤T}), σ²I), where u(·) is some function and I is the T × T identity.
2
To see this, simply observe that Pbayes (y ?T )
=
Q(y ?T |?map (y ?T ))q(?map (y ?T )) and take the ? log of both sides.
P
?
Q(y ?T |?)q(?)
?
technical reasons, we do not define the class of competitor functions ? to be the RKHS H,
PT
but instead define ? = {u(?)| u(x) = t=1 ?t K(x, xt ), ? ? RT } ? the set of kernel
expansions over x?T . The model class is then F = {P (?|x?T , u(?)) | u ? ?}. The representer theorem implies that competing against ? is equivalent to competing against the
RKHS.
It is easy to see that for this case, the NML density does not exist (recall Example 4.2); the
comparator class Θ contains very complex functions. However, the case is quite different
for the penalized regret. Now let us consider using a GP prior. We choose q to be the
corresponding density over Θ, which means that q(u) is proportional to exp(−‖u‖²_K/2),
where ‖u‖²_K = αᵀKα with K_{i,j} = K(x_i, x_j) (recall Eq. 1). Now note that the penalty
−log q(u) is just the RKHS norm ‖u‖²_K/2, up to an additive constant.
Using Theorem 4.3 and the equality in Theorem 3.2, we have the following corollary,
which shows that the Bayesian strategy is precisely the NMAP distribution (for Gaussian
regression).
Corollary 4.4: For any x_{≤T}, in the Gaussian regression setting described above (where
F and Θ are defined with respect to x_{≤T} and where q is the GP prior over Θ) we
have that P_bayes is a minimax strategy for the penalized regret, i.e., for all y_{≤T}, the regret
R_q(y_{≤T}, P_bayes, Θ) does not exceed R_q(Θ). Furthermore, P_bayes and P_map are densities of
the same distribution.
Importantly, note that, while the competitor class F is constructed with full knowledge of
x_{≤T} in advance, the Bayesian strategy P_bayes can be implemented in an online manner, in
that it only needs to know x_{<t} for prediction at time t.
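As a numerical sanity check on Corollary 4.4, the sketch below (under assumed choices: an RBF kernel, arbitrary toy sizes, and random inputs, none of which come from the text) verifies that the unnormalized NMAP score log Q(y_{≤T}|θ_map(y_{≤T})) + log q(θ_map(y_{≤T})) differs from log P_bayes(y_{≤T}) by a constant in y_{≤T}, so the two normalize to the same density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustrative only): RBF kernel, T inputs, noise sigma.
T, sigma = 5, 0.5
x = rng.uniform(-1, 1, size=T)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # K_ij = K(x_i, x_j)
S = K + sigma**2 * np.eye(T)                        # covariance of P_bayes

def log_gauss(y, mean, cov):
    d = y - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(y) * np.log(2 * np.pi))

def log_map_numerator(y):
    # sup_u [log N(y; K alpha, sigma^2 I) - ||u||_K^2 / 2], attained at the
    # GP MAP / kernel ridge solution alpha_map = (K + sigma^2 I)^{-1} y
    alpha = np.linalg.solve(S, y)
    fit = log_gauss(y, K @ alpha, sigma**2 * np.eye(T))
    return fit - 0.5 * alpha @ K @ alpha

# Corollary 4.4 implies these differ by a constant across sequences y:
gaps = []
for _ in range(4):
    y = rng.normal(size=T)
    gaps.append(log_map_numerator(y) - log_gauss(y, np.zeros(T), S))
print("log NMAP numerator minus log P_bayes (constant across y):", np.round(gaps, 8))
```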
Acknowledgments
We thank Manfred Opper and Manfred Warmuth for helpful discussions.
References
K. S. Azoury and M. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3), 2001.
A. Barron, J. Rissanen, and B. Yu. The minimum description length principle in coding and modeling.
IEEE Trans. Information Theory, 44, 1998.
N. Cesa-Bianchi and G. Lugosi. Worst-case bounds for the logarithmic loss of predictors.
Machine Learning, 43, 2001.
D. P. Foster. Prediction in the worst case. Annals of Statistics, 19, 1991.
D. P. Foster and R. A. Stine. The competitive complexity ratio. Proceedings of 2001 Conf on Info
Sci and Sys, WP8, 2001.
P.D. Grunwald. A tutorial introduction to the minimum description length principle. Advances in
MDL: Theory and Applications, 2005.
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
S. M. Kakade and A. Y. Ng. Online bounds for Bayesian algorithms. Proceedings of Neural Information Processing Systems, 2004.
M. Opper and D. Haussler. Worst case prediction over sequences under log loss. The Mathematics
of Information Coding, Extraction and Distribution, 1998.
Y. Shtarkov. Universal sequential coding of single messages. Problems of Information Transmission,
23, 1987.
E. Takimoto and M. Warmuth. The minimax strategy for Gaussian density estimation. Proc. 13th
Annu. Conference on Comput. Learning Theory, 2000.
V. N. Vapnik. Statistical Learning Theory. Wiley, 1st edition, 1998.
V. Vovk. Competitive on-line statistics. International Statistical Review, 69, 2001.
in Human Extrastriate Visual Cortex
Rory Sayres
Department of Neuroscience
Stanford University
Stanford, CA 94305
[email protected]
David Ress
Department of Neuroscience
Brown University
Providence, RI 02912
[email protected]
Kalanit Grill-Spector
Departments of Neuroscience and Psychology
Stanford University
Stanford, CA 94305
[email protected]
Abstract
The category of visual stimuli has been reliably decoded from patterns
of neural activity in extrastriate visual cortex [1]. It has yet to be seen
whether object identity can be inferred from this activity. We present
fMRI data measuring responses in human extrastriate cortex to a set of
12 distinct object images. We use a simple winner-take-all classifier,
using half the data from each recording session as a training set, to
evaluate encoding of object identity across fMRI voxels. Since this
approach is sensitive to the inclusion of noisy voxels, we describe two
methods for identifying subsets of voxels in the data which optimally
distinguish object identity. One method characterizes the reliability of
each voxel within subsets of the data, while another estimates the
mutual information of each voxel with the stimulus set. We find that
both metrics can identify subsets of the data which reliably encode
object identity, even when noisy measurements are artificially added to
the data. The mutual information metric is less efficient at this task,
likely due to constraints in fMRI data.
1 Introduction
Humans and other primates can perform fast and efficient object recognition. This ability
is mediated within a large extent of occipital and temporal cortex, sometimes referred to
as the ventral processing stream [10]. This cortex has been examined using
electrophysiological recordings, optical imaging techniques, and a variety of
neuroimaging techniques including functional magnetic resonance imaging (fMRI) [refs].
With fMRI, these regions can be reliably identified by their strong preferential response
to intact objects over other visual stimuli [9,10].
The functional organization of object-selective cortex is unclear. A number of regions
have been identified within this cortex, which preferentially respond to particular
categories of images [refs]; it has been proposed that these regions are specialized for
processing visual information about those categories [refs]. A recent study by Haxby and
colleagues [1] found that the category identity of different stimuli could be decoded from
fMRI response patterns, using a simple classifier in which half of each data set was used
as a training set and half as a test set. These results were interpreted as evidence for a
distributed representation of objects across ventral cortex, in which both positive and
negative responses contribute information about object identity. It is not clear, however,
to what extent information about objects is processed at the category level, and to what
extent it reflects individual object identity, or features within objects [1,8].
The study in [1] is one of a growing number of recent attempts to decode stimulus
identity by examining fMRI response patterns across cortex [1-4]. fMRI data has
particular advantages and disadvantages for this approach. Among its advantages are the
ability to make many measurements across a large extent of cortex in awake, behaving
humans. Its disadvantages include temporal and spatial resolution constraints, which limit
the number of trials that may be collected and the ability to examine trial-by-trial variation,
and which can limit the localization of small neuronal populations. A further potential
disadvantage arises from the little-understood functional organization of object-selective
cortical regions. Because it is not clear which parts of this cortex are involved in
representing different objects and which aren't, analyses may include fMRI image
locations (voxels) which are not involved in object representation.
The present study addresses a number of these questions by examining the response
patterns across object-selective cortex to a set of 12 individual object images, using high-resolution fMRI. We sought to address the following experimental questions: (1) Can
individual object identity be decoded from fMRI responses in object-selective cortex? (2)
How can one identify those subsets of fMRI voxels which reliably encode identity about
a stimulus, among a large set of potentially unrelated voxels? We adopt a similar
approach to that described in [1], subdividing each data set into training and test subsets,
and evaluate the efficiency of a set of voxels in discriminating object identity among the
12 possible images with a simple winner-take-all classifier. We then describe two metrics
from which to identify sets of voxels which reliably discriminate different objects. The
first metric estimates the replicability of voxels to each stimulus between the training and
the test data. The second metric estimates the mutual information each voxel has with the
stimulus set.
2 Experimental design and data collection
Our experimental design is summarized in Figure 1. We chose a stimulus set of 12 line
drawings of different object stimuli, shown in Figure 1a. These objects can be readily
categorized as faces, animals, or vehicles; these categories have been previously
identified as producing distinct patterns of blood-oxygenation-level-dependent (BOLD)
response in object-selective cortex [10]. This allows us to compare category and object
identity as potential explanatory factors for BOLD response patterns. Further, the use of
black-and-white line drawings reduces the number of stimulus features which
differentiate the stimuli, such as spatial frequency bands.
A typical trial is illustrated in Figure 1b. We presented one of the 12 object images to the
subject within the foveal 5 degrees of visual field for 2 sec, then masked the image with a
scrambled version of a random image for 10 sec. These scrambled images are known to
produce minimal response in our regions of interest [11], and serve as a baseline
condition for these experiments. Each scan contained one trial per image, presented in a
randomized order. We ran 10-15 event-related scans for each scanning session. This
allowed us to collect full hemodynamic responses to each image, which in BOLD signal
lags several seconds after stimulus onset. In this way we were able to analyze trial-bytrial variations in response to different images, without the analytic and design
restrictions involved in analyzing fMRI data with more closely-spaced trials [5]. This
feature was essential for computing the mutual information of a voxel with the stimulus
set.
[Figure 1 graphic: (a) the 12 line drawings, labeled face1-face4, bull, donkey, buffalo, ferret, dragster, truck, bus, boxster; (b) trial timeline, 2 sec stimulus followed by 10 sec mask; (c) axial anatomical slice, axes Left-Right and Posterior-Anterior.]
Figure 1: Experimental Design. (a) The 12 object stimuli used. (b) Example of a typical
trial. (c) Depiction of imaged region during one session. The image is an axial slice from
a T1-weighted anatomical image for one subject. The blue region shows the region
imaged at high resolution. The white outlines show gray matter within the imaged area.
We obtained high-resolution fMRI images at 3 Tesla using a spiral-out protocol. We used
a custom-built receive-only surface coil. This coil was small and flexible, with a 7.5 cm
diameter, and could be placed on a subject's skull directly over the region to be imaged.
Because of the restricted field of view of this coil, we imaged only right hemisphere
cortex for these experiments. We imaged 4 subjects (1 female), each of whom
participated in multiple recording sessions. For each recording session, we imaged 12
oblique slices, with voxel dimensions of 1 x 1 x 1 mm and a frame period of 2 seconds.
(More typical fMRI resolutions are around 3 x 3 x 3 mm to 3 x 3 x 6 mm, at least 27 times
lower in resolution.) A typical imaging prescription, superimposed over a high-resolution
T1-weighted anatomical image, is shown in Figure 1c.
Functional data from these experiments are illustrated in Figure 2. Within each session,
we identified object-selective voxels by applying a general linear model to the time series
data, estimating the amplitude of BOLD response to different images [5]. We then
computed contrast maps representing T tests of response of different images against the
baseline scrambled condition. An example of voxels localized in this way is illustrated in
Figure 2a, superimposed over mean T1-weighted anatomical images for two slices. Our
criterion for defining object-selective voxels was that a voxel needed to respond to at
least one of the 12 stimulus images relative to baseline with a significance level of p ≤
0.001. Each data set contained between 600 and 2500 object-selective voxels.
The design of our surface coil, combined with its proximity to the imaged cortex, allowed
us to observe significant event-related responses within single voxels. Figure 2b shows
peri-stimulus time courses to each image from four sample voxels. These responses are
summarized by subtracting the mean BOLD response after stimulus onset with the
response during the baseline period, as illustrated in Figure 2c. In this way we can
summarize a data set as a matrix A of response amplitudes to different voxels, where Ai,j
represents the response to the ith image of the jth voxel. These responses are statistically
significant (T test, p < 0.001) for many stimuli, yet the voxels are heterogeneous in their
responses?different voxels respond to different stimuli. This response diversity prompts
the questions of deciding which sets of responses, if any, are informative of image
identity.
[Figure 2 graphic: (a) anatomical slices with object-selective voxels highlighted; (b) peristimulus time courses for four sample voxels across the 12 images, with 10 sec and 10% signal-change scale bars; (c) mean response amplitudes (roughly -1 to 5% signal) for the same four voxels.]
Figure 2: Experimental Data. (a) T1-weighted anatomical images from a sample session,
with object-selective voxels indicated in orange. (b) Mean peristimulus time courses from
4 object-selective voxels in the lower slice of (a) (locations indicated by arrow), for each
image. Dotted lines indicate trial onset; dark bars at bottom indicate stimulus presentation
duration. Scale bars indicate 10 seconds duration and 10 percent BOLD signal change
relative to baseline. (c) Mean response amplitudes from the voxels depicted in (b),
represented as a set of column vectors for each voxel. Color indicates mean amplitude
during post-stimulus period relative to pre-stimulus period.
3 Winner-take-all classifier
Given a set of response amplitudes across object-selective voxels, how can we
characterize the discriminabilty of responses to different stimuli? This question can be
answered by constructing a classifier, which takes a set of responses to an unknown
stimulus, and compares it to a training set of responses to known stimuli. This general
approach has been successfully applied to fMRI responses in early visual cortex [3-4],
object-selective cortex [1], and across multiple cortical regions [2].
For our classifier, we adopt the approach used in [1], with a few refinements. As in the
previous study, we subdivide each data set into a training set and a test set, with the
training set representing odd-numbered runs and the test set representing even-numbered
runs. (Since each run contains one trial per image, this is equivalent to using odd- and
even-numbered trials). We construct a training matrix, A_training, in which each row
represents the response across voxels to a different image in the training data set. We
construct a second matrix, A_test, which contains the responses to different images during
the test set. These matrices are illustrated for one data set in Figure 3a. Each row of A_test
is considered to be the response to an unknown stimulus, and is compared to each of the
rows in A_training. The overall performance of the classifier is evaluated by its success
rate at classifying test responses based on the correlation to training responses.
[Figure 3 graphic: (a) training and test response-amplitude matrices (12 images by several hundred voxels, % signal color scale); (b) and (c) correlation matrices between training and test sets with winner-take-all guesses for two sessions, one at 100% correct and one at 42% correct; correlation color scale roughly -0.1 to 0.2.]
Figure 3: Illustration of winner-take-all classifier for two sample sessions. (a) Response
amplitudes for all object-selective voxels for the training (top) and test (bottom) data sets,
for one recording session. (b) Classifier results for the same session as in (a). Left:
Correlation matrix between the training and test sets. Right: Results of the winner-take-all algorithm. The red square in each row marks the training image that
produced the highest correlation with that test response, and is the "guess" of the classifier.
The percent correct is evaluated as the number of guesses that lie along the diagonal (the
same image in the training and test sets produces the highest correlation). (c) Results for a
second session, in the same format as (b).
We evaluate classifier performance with a winner-take-all criterion, which is more
conservative than the criterion in [1]. First, a correlation matrix R is constructed
containing correlation coefficients for each pairwise comparison of rows in A_training and
A_test (shown on the left in Figure 3b and 3c for two data sets). The element R_{i,j} represents
the correlation coefficient between row i of A_test and row j of A_training. Then, for each row
in the correlation matrix, the classifier "guesses" the identity of the test stimulus by
selecting the element with the highest coefficient (shown on the right in Figure 3b and
3c). Correct guesses lie along the diagonal of this matrix, R_{i,i}.
The previously-used method evaluated classifier performance by successively pairing off
the correct stimulus with incorrect stimuli from the training set [1]. With this criterion,
responses from the test set which do not correlate maximally with the same stimulus in
the training set might still lead to high classifier performance. For instance, if an element
R_{i,i} is larger than all but one coefficient in row i, pairwise comparisons would reveal
correct guesses for 10 out of 11 comparisons, or 91% correct, while the winner-take-all
criterion would consider this 0%. This conservative criterion reduces chance performance
from 1/2 to 1/12, and ensures that high classifier performance reflects a high level of
discriminability between different stimuli, providing a stringent test for decoding.
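A minimal sketch of this training/test split and winner-take-all evaluation is shown below (function and variable names are ours; the synthetic data at the bottom merely stands in for real session matrices):

```python
import numpy as np

def winner_take_all_accuracy(A_train, A_test):
    """Percent of test stimuli whose response pattern correlates most
    strongly with the training pattern for the same stimulus.

    A_train, A_test: (n_images, n_voxels) arrays of response amplitudes,
    rows ordered identically by stimulus.
    """
    # Pearson correlations between every test row and every training row
    R = np.corrcoef(A_test, A_train)[:len(A_test), len(A_test):]
    guesses = np.argmax(R, axis=1)            # winner-take-all guess per test row
    correct = guesses == np.arange(len(A_test))
    return 100.0 * correct.mean()

# Hypothetical usage on synthetic data standing in for the 12-image sessions:
rng = np.random.default_rng(1)
signal = rng.normal(size=(12, 800))                       # "true" pattern per image
A_train = signal + 0.8 * rng.normal(size=signal.shape)    # odd-numbered runs
A_test = signal + 0.8 * rng.normal(size=signal.shape)     # even-numbered runs
print(f"{winner_take_all_accuracy(A_train, A_test):.0f}% correct (chance = 1/12)")
```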
4 Identifying voxels which distinguish objects
When we examined response patterns across all object-selective voxels, we observed
high levels of classifier performance from some recording sessions, as shown in Session
A in Figure 3. Many sessions, however, were more similar to Session B: limited success
at decoding object identity when using all voxels.
For both cases, a relevant question is the extent to which information is contained within
a subset of the selected voxel. The distributed representation implied in Session A may be
driven by only a few informative voxels; conversely, excessively noisy or unrelated
activity from other voxels may have degraded classifier performance on Session B. This is of
particular concern given that the functional organization of this cortex is not well
understood. In addition to using such classifiers to test a hypothesis that a pre-defined
region of interest can discriminate stimuli, it would be highly useful to use the classifier
to identify cortical regions which represent a stimulus.
To identify subsets of the data which reliably represent different stimuli, we search
among the set of object-selective voxels using two metrics to rank voxels: (1) The
reliability of each voxel between the training and test data subsets; and (2) The mutual
information of each voxel with the stimulus set.
4.1 Voxel reliability metric
The voxel reliability metric is computed for each voxel by taking the vectors of 12
response amplitudes to each stimulus in the training and test sets, and calculating their
correlation coefficient. Voxels with high reliability will have high values for the diagonal
elements in the R correlation matrix, but this does not place constraints on correlations
for the off-diagonal comparisons. For instance, persistently active and nonspecific voxels
(such as might be expected from draining veins or sinuses) would have high voxel
reliability, but also high correlation for all pairwise comparisons between stimuli in test
and training sets, and so would not guarantee high classifier performance.
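In code, the reliability metric is simply a per-voxel (column-wise) correlation between the training and test response vectors; a sketch, with names of our choosing:

```python
import numpy as np

def voxel_reliability(A_train, A_test):
    """Correlation, per voxel, between its training and test response
    vectors (the 12 amplitudes). Inputs: (n_images, n_voxels) arrays."""
    a = A_train - A_train.mean(axis=0)
    b = A_test - A_test.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a**2).sum(axis=0) * (b**2).sum(axis=0))

# Rank voxels from most to least reliable:
# order = np.argsort(-voxel_reliability(A_train, A_test))
```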
4.2 Mutual information metric
The mutual information for a voxel is computed as the difference between the overall
entropy of the voxel and the "noise entropy", the sum over all stimuli of the entropy of
the voxel given each stimulus [6]:

I_m = H − H_noise = −Σ_r P(r) log₂ P(r) + Σ_{s,r} P(s) P(r|s) log₂ P(r|s)    (1)
In this formula, P(r) represents the probability of observing a response level r and P(r|s)
represents the probability of observing response r given stimulus s. Computing these
probabilities presents a difficulty for fMRI data, since an accurate estimate requires many
trials. Given the hemodynamic lag of 9-16 sec inherent to measuring BOLD signal, and
the limitations of keeping a human observer in an MRI scanner before motion artifacts or
attentional drifts confound the signals, it is difficult to obtain many trials over which to
evaluate different response probabilities. There are two possible solutions to this: find
ways of obtaining large numbers of trials, e.g., through co-registering data across many
sessions; and reduce the number of possible response bins for the data. While the first
option is an area of active pursuit for us, we will focus here on the second approach.
Given the low number of trials per image, we reduce the number of possible response
levels to only two bins, 0 and 1. This allows for a wider range of possible values for P(r)
and P(r|s) at the expense of ignoring potential information contained in varying response
levels. Given these two bins, the next question is deciding how to threshold responses to
decide if a given voxel responded significantly (r=1) or not (r=0) on a given trial. Since
we do not have an a priori hypothesis about the value of this threshold, we choose it
separately for each voxel, such that it maximizes the mutual information of that voxel.
This approach has been used previously to reduce free parameters while developing
artificial recognition models [7].
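The binarized estimator described here can be sketched as follows: for each voxel, the threshold is swept over the observed response levels and the value maximizing Eq. (1) is kept. All names are ours, and the plug-in probability estimates assume enough trials per stimulus to be meaningful.

```python
import numpy as np

def binary_mutual_information(r, s):
    """Mutual information (bits) between a binary response r in {0, 1} and
    stimulus labels s, using plug-in estimates of P(r), P(s), and P(r|s)."""
    mi = 0.0
    for sv in np.unique(s):
        ps = np.mean(s == sv)
        for rv in (0, 1):
            prs = np.mean(r[s == sv] == rv)       # P(r|s)
            pr = np.mean(r == rv)                 # P(r)
            if prs > 0:
                mi += ps * prs * np.log2(prs / pr)
    return mi

def best_threshold_mi(responses, s):
    """Per-voxel threshold chosen to maximize that voxel's mutual
    information with the stimulus set, as described in the text."""
    levels = np.unique(responses)
    if len(levels) < 2:
        return 0.0
    return max(binary_mutual_information((responses > th).astype(int), s)
               for th in levels[:-1])
```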
[Figure 4 graphic: three panels (a)-(c) of classifier performance (% correct) versus subset size (in voxels, or as a proportion of all voxels), with curves for the voxel-reliability and mutual-information rankings and a dotted chance-performance line.]
Figure 4: Comparison of metrics for identifying reliable subsets of voxels in data sets. (a)
Performance on the winner-take-all classifier of different-sized subsets of one data set
("Session B" in Figure 3), sorted by voxel reliability (gray, solid) and mutual information
(red, dashed) metrics. (b) Performance of the two metrics across 12 data sets. Each curve
represents the mean (thick line) ? standard error of the mean across data sets. (c)
Performance on data set from (a) when reverse-sorting voxels by each metric. Dotted
black line indicates chance performance.
After ranking each voxel with the two metrics, we evaluated how well these voxels found
reliable object representations. To do this, we sorted the voxels in descending order
according to each metric; selected progressively larger subsets of voxels, starting with the
10 highest-ranked voxels and proceeding to the full set of voxels; and evaluated
performance on the classifier for each subset. Results of these analyses are summarized in
Figure 4. Figure 4a shows performance curves for the two sortings on data from the
"Session B" data set illustrated in Figure 3. As can be seen, while performance using all
voxels is at 42% correct, by removing voxels, performance quickly reaches 100% using
the reliability criterion. The mutual information metric also converges to 100%, albeit
slightly more slowly. Also note that for very small subset sizes, performance decreases
again: correct discrimination requires information distributed across a set of voxels.
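The sweep itself can be sketched as below, reusing the classifier and metric functions from the earlier sketches (sorting is descending, so the highest-ranked voxels enter first):

```python
import numpy as np

def subset_sweep(A_train, A_test, scores, sizes):
    """Evaluate winner-take-all accuracy on progressively larger subsets of
    voxels, ranked by `scores` (e.g., voxel reliability or mutual information)."""
    order = np.argsort(-scores)
    return [winner_take_all_accuracy(A_train[:, order[:n]], A_test[:, order[:n]])
            for n in sizes]

# e.g.: subset_sweep(A_train, A_test, voxel_reliability(A_train, A_test),
#                    sizes=range(10, A_train.shape[1] + 1, 10))
```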
Finally, we repeated our analyses across 12 data sets collected from 4 subjects. Figure 4b
shows the mean performance across sessions for the two metrics. These curves are
normalized by the proportion of total available voxels for each data set. Overall, the voxel
reliability metric was significantly better at identifying subsets of voxels which could
discriminate object identity, although both metrics performed significantly better than the
1/12 chance performance at the classifier task, and both produced pronounced
improvements in performance for smaller subsets compared to using the entire data sets.
Note that simply removing voxels does not guarantee the better performance on the
classifier. If the voxels are sorted in reverse order, starting with e. g. the lowest values of
voxel reliability or mutual information, subsets containing half the voxels are consistently
at or below chance performance (Figure 4c).
5 Summary and conclusions
Developing and training classifiers to identify cognitive states based on fMRI data is a
growing and promising approach for neuroscience [1-4]. One drawback to these methods,
however, is that they often require prior knowledge of which voxels are involved in
specifying a cognitive state, and which aren't. Given the poorly-understood functional
organization of the majority of cortex, an important goal is to develop methods to search
across cortex for regions which represent such states. The results described here represent
one step in this direction.
Our voxel-ranking metrics successfully identified subsets of object-selective voxels
which discriminate object identity. This demonstrates the feasibility of adapting classifier
methods to search across cortical regions. However, these methods can be refined
considerably. The most important improvement is providing a larger set of trials from
which to compute response probabilities. This is currently being pursued by combining
data sets from multiple recording sessions in a reference volume. Given more extensive
data, the set of possible response bins can be increased from the current binary set, which
should improve performance of our mutual information metric.
Our results also have several implications for object recognition. We found a high ability
to discriminate between individual images in our data sets. Moreover, this discrimination
could be performed with sets of voxels of widely varying sizes. For some sessions,
perfect discrimination could be achieved using all object-selective voxels, which number
in the thousands (Figure 3a, 3b); for many others, perfect discrimination was possible
using subsets as small as a few dozen voxels. This has implications for the distributed
nature of object representation in extrastriate cortex. However, it raises the question of
identifying redundant information within these representations. The distributed
representations may reflect functionally distinct areas which are processing different
aspects of each stimulus, as in earlier visual cortex. Mutual information approaches have
succeeded at identifying redundant coding of information in other sensory areas [11], and
can be tested on the known functional subdivisions in early visual cortex. In this way, we
can use intuitions generated by ideal observers of the data, such as the classifier described
here, and apply them to understanding how the brain processes this information.
Acknowledgments
We would like to thank Gal Chechik and Brian Wandell for input on analysis techniques.
This work was supported by NEI National Research Service Award 5F31EY015937-02
to RAS, and a research grant 2005-05-111-RES from the Whitehall Foundation to KGS.
References
[1] Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, and Pietrini P. (2001) Distributed and
overlapping representations of faces and objects in ventral temporal cortex. Science 293:2425-30.
[2] Wang X, Hutchinson R, and Mitchell TM (2004) Training fMRI classifiers to distinguish
cognitive states across multiple subjects. In S. Thrun, L. Saul and B. Schölkopf (eds.), Advances in
Neural Information Processing Systems 16. Cambridge, MA: MIT Press.
[3] Kamitani Y and Tong F. (2005) Decoding the visual and subjective contents of the human
brain. Nat Neurosci. 8:679-85.
[4] Haynes JD and Rees G. (2005) Predicting the orientation of invisible stimuli from activity in
human primary visual cortex. Nat Neurosci. 8:686-691.
[5] Burock MA and Dale AM. (2000) Estimation and Detection of Event-Related fMRI Signals
with temporally correlated noise: a statistically efficient and unbiased approach. Human Brain
Mapping 11:249-260.
[6] Abbott L and Dayan P (2001) Theoretical Neuroscience. Cambridge, MA: MIT Press.
[7] Ullman S, Vidal-Naquet M, and Sali E. Visual features of intermediate complexity and their use
in classification. Nat Neurosci. 5(7):682-7.
[8] Tsunoda K, Yamane Y, Nishizaki M, and Tanifuji M. (2001) Complex objects are represented
in macaque inferotemporal cortex by the combination of feature columns. Nat Neurosci.4:832-8.
[9] Grill-Spector K, Kushnir T, Hendler T, and Malach R. (2000) The dynamics of object-selective
activation correlate with recognition performance in humans. Nat Neurosci. 3:837-43.
[10] Malach R, Reppas JB, Benson RR, Kwong KK, Jiang H, Kennedy WA, Ledden PJ, Brady TJ,
Rosen BR, and Tootell RB. (1995) Object-related activity revealed by functional magnetic
resonance imaging in human occipital cortex. Proc Natl Acad Sci U S A 92:8135-8139.
[11] Chechik G, Globerson A, Anderson MJ, Young ED, Nelken I, and Tishby N. (2001) Groups
redundancy measures reveal redundancy reduction along the auditory pathway. Advances in Neural
Information Processing Systems 14. Cambridge, MA: MIT Press.
ON TROPISTIC PROCESSING AND ITS APPLICATIONS
Manuel F. Fernandez
General Electric Advanced Technology Laboratories
Syracuse, New York 13221
ABSTRACT
The interaction of a set of tropisms is sufficient in many
cases to explain the seemingly complex behavioral responses
exhibited by varied classes of biological systems to combinations of
stimuli. It can be shown that a straightforward generalization of
the tropism phenomenon allows the efficient implementation of
effective algorithms which appear to respond "intelligently" to
changing environmental conditions. Examples of the utilization of
tropistic processing techniques will be presented in this paper in
applications entailing simulated behavior synthesis, path-planning,
pattern analysis (clustering), and engineering design optimization.
INTRODUCTION
The goal of this paper is to present an intuitive overview of
a general unsupervised procedure for addressing a variety of system
control and cost minimization problems. This procedure is based on
the idea of utilizing "stimuli" produced by the environment in which
the systems are designed to operate as basis for dynamically
providing the necessary system parameter updates.
This is by no means a new idea: countless examples of this
approach abound in nature, where innate reactions to specific
stimuli ("tropisms" or "taxis" --not to be confused with
"instincts") provide organisms with built-in first-order control
laws for triggering varied responses [8]. (It is hypothesized that
"knowledge" obtained through evolution/adaptation or through
learning then refines or suppresses most of these primal reactions).
Several examples of the implicit utilization of this approach
can also be found in the literature, in applications ranging from
behavior modeling to pattern analysis. We very briefly depict some of
these applications, underlining a common pattern in their
formulation and generalizing it through the use of basic field
theory concepts and representations. A more rigorous and detailed
exposition --regarding both mathematic and
application/implementation aspects-- is presently under preparation
and should be ready for publication sometime next year ([6]).
TROPISMS
Tropisms can be defined in general as class-invariant systemic
responses to specific sets of stimuli [6]. All time-invariant
systems can thus be viewed as tropistic provided that we allow all
possible stimuli to form part of our set of inputs. In most
tropistic systems, however, response- (or time-) invariance applies
only to specific inputs: green plants, for example, twist and grow
in the direction of light (phototropism), some birds' flight
patterns follow changes in the Earth's magnetic field
(magnetotropism), various organisms react to gravitational field
© American Institute of Physics 1988
variations (geotropism), etc.
Tropism/stimuli interactions can be portrayed in terms of the
superposition of scalar (e.g., potential) or vector (e.g., force)
fields exhibiting properties paralleling those of the suitably
constrained "reactions" we wish to model [1J,[6J. The resulting
field can then be used as a basis for assessing the intrinsic cost
of pursuing any given path of action, and standard techniques (e.g.,
gradient-following in the case of scalar fields or divergence
computation in the case of vector fields) utilized in determining a
response*. In addition, the global view of the situation provided by
field representations suggest that a basic theory of tropistic
behavior can also be formulated in terms of energy expenditure
minimization (Euler-Lagrange equations). This formulation would
yield integral-based representations (Feynman path integrals
[7],[11]) satisfying the observation that tropistic processes
typically obey the principle of least action.
Alternatively, fields may also be collapsed into "attractors"
(points of a given "mass" or "charge" in cost space) through laws
defining the relationships that are to exist among these
"at tractors" and the other particles traveling through the space.
This provides the simplification that when updating dynamically
changing situations only the effects caused by the interaction of
the attractors with the particles of interest --rather than the
whole cost field-- may have to be recalculated.
For example, appropriately positioned point charges exerting
on each other an electrostatic force inversely proportional to the
square of their distance can be used to represent the effects of a
coulombic-type cost potential field. A particle traveling through
this field would now be affected by the combination of forces
ensuing from the interaction of the attractors' charges with its
own. If this particle were then to passively follow the composite of
the effects of these forces it would be following the gradient of
the cost field (i.e., the vector resulting from the superposition of
the forces acting on the particle would point in the direction of
steepest change in potential).
Finally, other representations of tropism/stimuli interactions
(e.g., Value-Driven Decision Theory approaches) entail associating
"profit" functions (usually sigmoidal) with each tropism, modeling
the relative desirability of triggering a reaction as a function of
the time since it was last activated [9]. These representations are
* In order to bring extra insight into
tropism/stimuli
interactions and simplify their formulation, one may exchange vector
and scalar field representations through the
utilization
of
appropriately selected mappings. Some of the most important of such
mappings are the gradient operator (particularly so because the
gradient of a scalar --potential-- field is proportional to a
"force" --vector-- field), the divergence (which may be thought of
as performing in vector fields a function analogous to that
performed in scalar fields by the gradient), and their combinations
(e.g., the Laplacian, a scalar-to-scalar mapping which can be
visualized as performing on potential fields the equivalent of a
second derivative operation).
[Figure 1 inset, decision-rule box:]
- Model fly as a positive geotropistic point of mass M.
- Model fence stakes as negative geotropistic points with masses m_1, m_2, ..., m_k.
- At each update time compute the sum of forces acting on the frog: F = k Σ_i M m_i / d_i².
- Compute frog's heading and acceleration based on the ensuing force; then update frog's position.
Figure 1: Attractor-based representation of a frog-fence-fly
scenario (see [1] for a vector-field representation). The objective
is to model a frog's path-planning decision-making process when
approaching a fly in the presence of obstacles. (The picket fence is
represented by the elliptical outline with an opening in the back,
the fly --inside the fenced space-- is represented by a "+" sign,
and arrows are used to indicate the direction of a frog's trajectory
into and out of the fenced area.)
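A sketch of the update rule in the box above, generalized so that the same routine handles the frog-fence-fly scenario and the air-reconnaissance example later in the text (positions, masses, and gains are illustrative assumptions; the sign of the mass selects attraction or repulsion):

```python
import numpy as np

def tropistic_step(pos, vel, attractors, dt=0.05, k=1.0, vmax=1.0):
    """One update of a passively gradient-following agent.

    attractors: list of (position, signed mass) pairs; a positive mass pulls
    (the fly / objective), a negative mass repels (fence stakes / SAM sites)."""
    force = np.zeros(2)
    for p, m in attractors:
        d = p - pos
        r = np.linalg.norm(d) + 1e-6        # avoid division by zero
        force += k * m * d / r**3           # inverse-square law: |F| = k*m/r^2
    vel = vel + dt * force
    speed = np.linalg.norm(vel)
    if speed > vmax:                        # constrained maximum speed
        vel *= vmax / speed
    return pos + dt * vel, vel

# Illustrative frog-fence-fly setup (all positions and masses are assumptions):
fly = (np.array([0.0, 0.0]), +5.0)
stakes = [(np.array([np.cos(a), np.sin(a)]), -0.3)
          for a in np.linspace(0.3, 2 * np.pi - 0.3, 12)]   # fence with a gap
pos, vel = np.array([0.0, -2.5]), np.zeros(2)
for _ in range(2000):
    pos, vel = tropistic_step(pos, vel, [fly] + stakes)
print("final distance to fly:", np.linalg.norm(pos - fly[0]))
```

As the Figure 4 panels later illustrate, the resulting trajectory depends strongly on the mass ratios and gains chosen.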
particularly amenable to neural-net implementations [6].
TROPISTIC PROCESSING
Tropistic processing entails building into systems tropisms
appropriate for the environment in which these systems are expected
to operate. This allows taking advantage of environment-produced
"stimuli" for providing the required control for the systems'
behavior.
The idea of tropistic processing has been utilized with good
results in a variety of applications. Arbib et al., for example,
have implicitly utilized tropistic processing to describe a
batrachian's reaction to its environment in terms of what may be
visualized as magnetic (vector) fields' interactions [1].
Watanabe [12] devised for pattern analysis purposes an
interaction of tropisms ("geotropisms") in which pattern "atoms" are
attracted to each other, and hence "clustered", subject to a
squared-inverse-distance ("feature distance") law similar to that
from gravitational mechanics. It can be seen that if each pattern
atom were considered an "organism", its behavior would not be
conceptually different from that exhibited by Arbibian frogs: in
both cases organisms passively follow the force vectors resulting
from the interaction of the environmental stimuli with the
organisms' tropisms. It is interesting, though, to note that the
"organisms'" behavior will nonetheless appear "intelligent" to the
casual observer.
The ability of tropistic processes to emulate seemingly
rational behavior is now beginning to be explored and utilized in the
development of synthetic-psychological models and experiments.
Braitenberg, for example, has placed tropisms as the primal building
block from which his models for cognition, reason, and emotions
evolve [3]**; Barto [2] has suggested the possibility of combining
tropisms and associative (reinforced) learning, with aims at
enabling the automatic triggering of behavioral responses by
previously experienced situations; and Fernandez [6] has used
CROBOTS [10], a virtual multiprocessor emulator, as a laboratory for
evaluating the effects of modifying tropistic responses on the basis
of their projected future consequences.
Other applications of tropistic processing presently being
investigated include path-planning and engineering design
optimization [6]. For example, consider an air-reconnaissance
mission deep behind enemy lines; as the mission progresses and
unexpected SAM sites are discovered, contingency flight paths may be
developed in real time simply by modeling each SAM or interdiction
site as a mass point towards which the aircraft exhibits negative
geotropistic tendencies (i.e., gravitational forces repel it), and
modeling the objective as a positive geotropistic point. A path to
** Of particular interest within the sole context of Tropistic
Processing is Dewdney's [5] commented version of the first chapters
of Braitenberg's book [3J, in which the "behavior" of mechanically
very simple cars, provided with "eyes" and phototropism-supporting
connections (including Ledley-type "neurons" [4]), is "analyzed".
Figure 2 (Geotropistic clustering [12]): The problem being portrayed
here is that of clustering dots distributed in [x,y]-space as shown,
and uniformly in color ([red, blue, green]). The approach followed is
that outlined in Figure 1, with the differences that normalized
(Mahalanobis) distances are used and, when merges occur, conservation
of momentum is observed. Tags are also kept --specifying with which
dots and in what order merges occur-- to allow drawing cluster
boundaries in the original data set. (Efficient implementation of
this clustering technique entails using a ring of processors, each
of which is assigned the "features" of one or more "dots" and the
task of carrying out computations with respect to these features. If
the features of each dot are then transmitted through the ring, all
the forces imposed on it by the rest will have been determined upon
completion of the circuit.)
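A serial sketch of this geotropistic clustering follows; the ring-of-processors layout above parallelizes the same force loop. For brevity the sketch uses Euclidean rather than normalized (Mahalanobis) distances, and all parameter values are assumptions.

```python
import numpy as np

def geotropistic_clustering(X, dt=0.1, steps=200, merge_dist=0.05, k=1.0):
    """Each pattern 'atom' attracts every other under an inverse-square law in
    feature space; atoms that come within merge_dist are merged, conserving
    momentum. Returns final positions, masses, and the merge log used to
    trace cluster membership back to the original dots."""
    pos = X.astype(float).copy()
    vel = np.zeros_like(pos)
    mass = np.ones(len(pos))
    merges = []                              # (surviving index, absorbed index)
    alive = list(range(len(pos)))
    for _ in range(steps):
        for i in alive:
            f = np.zeros(pos.shape[1])
            for j in alive:
                if i == j:
                    continue
                d = pos[j] - pos[i]
                r = np.linalg.norm(d) + 1e-9
                f += k * mass[j] * d / r**3  # gravitational attraction
            vel[i] += dt * f
        for i in alive:
            pos[i] += dt * vel[i]
        # merge close pairs, conserving total mass and momentum (m * v)
        for i in list(alive):
            for j in list(alive):
                if j <= i or i not in alive or j not in alive:
                    continue
                if np.linalg.norm(pos[i] - pos[j]) < merge_dist:
                    m = mass[i] + mass[j]
                    vel[i] = (mass[i] * vel[i] + mass[j] * vel[j]) / m
                    pos[i] = (mass[i] * pos[i] + mass[j] * pos[j]) / m
                    mass[i] = m
                    alive.remove(j)
                    merges.append((i, j))
    return pos[alive], mass[alive], merges
```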
the target will then be automatically drawn by the interaction of
the tropisms with the gravitational forces. (Once the mission has
been completed, the target and its effects can be eliminated,
leaving active only the repulsive forces, which will then "guide"
the airplane out of the danger zone).
In engineering design applications such as lens modeling and
design, lenses (gradient-index type, for example) can be modeled in
terms of photons attempting to reach an objective plane through a
three-dimensional scalar field of refraction indices; modeling the
process tropistically (in a manner analogous to that of the
air-reconnaissance example above) would yield the least-action paths
that the individual photons would follow. Similarly, in
"surface-of-revolution" fuselage design ("Newton's Problem"), the
characteristics of the interaction of forces acting within a sheet
of metal foil when external forces (collisions with a fluid's
molecules) are applied can be modeled in terms of tropistic
reactions which will tend to reconfigure the sheet so as to make it
present the least resistance to friction when traversing a fluid.
Additional applications of tropistic processing include target
tracking and multisensor fusion (both can be considered instances of
"clustering") [6], resource allocation and game theory (both closely
related to path-planning) [9], and an assortment of other
cost-minimization functions. Overall, however, one of the most
important applications of tropistic processing may be in the
modeling and understanding of analog processes [6], the imitation of
which may in turn lead to the development of effective strategies
[Figure 3 block diagram of the tropism-based system: OBSERVATIONS feed a BASIC TROPISM FUNCTION whose output passes through a RESPONSE FUNCTION to yield the RESPONSE; PAST EXPERIENCE (e.g., memory maps) and the PREDICTED (i.e., modelled) OUTCOME feed back into the loop.]
Figure 3: The combination of tropisms and associative (reinforced)
learning can be used to enable the automatic triggering of
behavioral responses by previously experienced situations [2]. Also,
the modeled projection of the future consequences of a tropistic
decision can be utilized in the modification of such a decision [6].
(Note the analogy to a filtering problem in which past history and
predicted behavior are used to smooth present observations).
i
-5000.0
-33?.3
? ? ''''.7
-I
i,
.lll3.J
5000.'
-5000.0
-3l?.l
?
-,
?
''''.7
lJl3.J
5000.0
3lJ3.J
5000.0
i,
i,
oD
01
i,
"',
to
Figure 4: Simplified representation of air-reconnaissance mission
example (see text): objective is at center of coordinate axis, thick
dots represent SAM sites, and arrows denote airplane's direction of
flight (airplane's maximum attainable speed and acceleration are
constrained). All portrayed scenarios are identical except for
tropistic control-law parameters (mainly objective to SAM-sites mass
ratios in the first three scenarios). Varying the masses of the
objective and SAM sites can be interpreted as trading off the
relative importance of the mission vs. the aircraft's safety, and
can produce dramatically differing flight paths, induce chaotic
behavior (bottom-left scenario), or render the system unstable. The
bottom-right scenario portrays the situation in which a tropistic
decision is projected into the future and, if not meeting some
criterion, modified (altering the direction of flight --e.g.,
following an isokline--, re-evaluating the mission's relative
importance --revising masses--, changing the update rate, etc.).
for taking full advantage of parallel architectures [11]***. It is
thus expected that the flexibility of tropistic processes to adapt
to changing environmental conditions will prove highly valuable to
the advancement of areas such as robotics, parallel processing and
artificial intelligence, where at the very least they will provide
some decision-making capabilities whenever unforeseen circumstances
are encountered.
ACKNOWLEDGEMENTS
Special thanks to D. P. Bray for the ideas provided in our
many discussions and for the development of the finely detailed
simulations that have enabled the visualization of unexpected
aspects of our work.
REFERENCES
[1] Arbib, M.A. and House, D.H.: "Depth and Detours: Decision Making
in Parallel Systems". IEEE Workshop on Languages for Automation:
Cognitive Aspects in Information Processing; pp. 172-180 (1985).
[2] Barto, A.G. (Editor): "Simulation Experiments with Goal-Seeking
Adaptive Elements". Avionics Laboratory, Wright-Patterson Air
Force Base, OH. Report # AFVAL-TR-84-1022. (1984).
[3] Braitenberg, V.: Vehicles: Experiments in Synthetic Psychology.
The MIT Press. (1984).
[4] Cheng, G.C.; Ledley, R.S.; and Ouyang, B.: "Pattern Recognition
with Time Interval Modulation Information Coding". IEEE
Transactions on Aerospace and Electronic Systems. AES-6, No.2;
pp. 221-227 (1970).
[5] Dewdney, A.K.: "Computer Recreations". Scientific American.
Vol.256, No.3; pp. 16-26 (1987).
[6] Fernández, M.F.: "Tropistic Processing". To be published (1988).
[7] Feynman, R.P.: Statistical Mechanics: A Set of Lectures.
Frontiers in Physics Lecture Note Series (1972).
[8] Hirsch, J.: "Nonadaptive Tropisms and the Evolution of
Behavior". Annals of the New York Academy of Sciences. Vol.223;
pp. 84-88 (1973).
[9] Lucas, G. and Pugh, G.: "Applications of Value-Driven
Automation Methodology for the Control and Coordination of
Netted Sensors in Advanced C3". Report # RADC-TR-80-223.
Rome Air Development Center, NY. (1980).
[10] Poindexter, T.: "CROBOTS". Manual, programs, and files (1985).
2903 Winchester Dr., Bloomington, IL., 61701.
[11] Wallqvist, A.; Berne, B.J.; and Pangali, C.: "Exploiting
Physical Parallelism Using Supercomputers: Two Examples from
Chemical Physics". Computer. Vol.20, No.5; pp. 9-21 (1987).
[12] Watanabe, S.: Pattern Recognition: Human and Mechanical.
John Wiley & Sons; pp. 160-168 (1985).
*** Optical Fourier transform operations, for instance, can be
modeled in high-granularity machines through a procedure analogous
to the gradient-index lens simulation example, with processors
representing diffraction-grating "atoms" [6].
1,981 | 280 | Generalized Hopfield Networks and Nonlinear Optimization
Gintaras V. Reklaitis
Dept. of Chemical Eng.
Purdue University
W. Lafayette, IN. 47907
Athanasios G. Tsirukis 1
Dept. of Chemical Eng.
Purdue University
W. Lafayette, IN. 47907
Manoel F. Tenorio
Dept. of Electrical Eng.
Purdue University
W. Lafayette, IN. 47907
ABSTRACT
A nonlinear neural framework, called the Generalized Hopfield
network, is proposed, which is able to solve in a parallel distributed
manner systems of nonlinear equations. The method is applied to the
general nonlinear optimization problem. We demonstrate GHNs
implementing the three most important optimization algorithms,
namely the Augmented Lagrangian, Generalized Reduced Gradient and
Successive Quadratic Programming methods. The study results in a
dynamic view of the optimization problem and offers a straightforward
model for the parallelization of the optimization computations, thus
significantly extending the practical limits of problems that can be
formulated as an optimization problem and which can gain from the
introduction of nonlinearities in their structure (e.g. pattern recognition,
supervised learning, design of content-addressable memories).
1. To whom correspondence should be addressed.
1 RELATED WORK
The ability of networks of highly interconnected simple nonlinear analog processors
(neurons) to solve complicated optimization problems was demonstrated in a series of
papers by Hopfield and Tank (Hopfield, 1984), (Tank, 1986).
The Hopfield computational model is almost exclusively applied to the solution of
combinatorially complex linear decision problems (e.g. Traveling Salesman Problem).
Unfortunately such problems can not be solved with guaranteed quality, (Bruck, 1988),
getting trapped in locally optimal solutions.
Jeffrey and Rosner, (Jeffrey, 1986), extended Hopfield's technique to the nonlinear
unconstrained optimization problem, using Cauchy dynamics. Kennedy and Chua,
(Kennedy, 1988), presented an analog implementation of a network solving a nonlinear
optimization problem. The underlying optimization algorithm is a simple transformation
method, (Reklaitis, 1983), which is known to be relatively inefficient for large nonlinear
optimization problems.
2 LINEAR HOPFIELD NETWORK (LHN)
The computation in a Hopfield network is done by a collection of highly interconnected
simple neurons. Each processing element, i, is characterized by the activation level, Ui,
which is a function of the input received from the external environment, Ii, and the state
of the other neurons. The activation level of i is transmitted to the other processors, after
passing through a filter that converts U_i to a 0-1 binary value, V_i.
The time behavior of the system is described by the following model:
$$\frac{dU_i}{dt} = \sum_j T_{ij} V_j - \frac{U_i}{R_i} + I_i$$
where Tij are the interconnection strengths. The network is characterized as linear,
because the neuron inputs appear linearly in the neuron's constitutive equation. The
steady-state of a Hopfield network corresponds to a local minimum of the corresponding
quadratic Lyapunov function:
$$E = -\frac{1}{2} \sum_i \sum_j T_{ij} V_i V_j + \sum_i I_i V_i + \sum_i \frac{1}{R_i} \int_0^{V_i} s_i^{-1}(V)\, dV$$
If the matrix [T_ij] is symmetric, the steady-state values of V_i are binary. These
observations turn the Hopfield network into a very useful discrete optimization tool.
Nonetheless, the linear structure poses two major limitations: The Lyapunov (objective)
function can only take a quadratic form, whereas the feasible region can only have a
hypercube geometry (-1 ≤ V_i ≤ 1). Therefore, the Linear Hopfield Network is limited
to solve optimization problems with quadratic objective function and linear constraints.
The general nonlinear optimization problem requires arbitrarily nonlinear neural
interactions.
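Before moving to the nonlinear case, the linear model above is easy to
simulate. The Python sketch below is purely illustrative (random symmetric
T_ij, a smooth tanh filter standing in for the 0-1 threshold, Euler
integration); it relaxes a small network and reports the quadratic and linear
parts of the Lyapunov function.

import numpy as np

rng = np.random.default_rng(0)
N = 8
T = rng.normal(size=(N, N))
T = (T + T.T) / 2                   # symmetric interconnection strengths
np.fill_diagonal(T, 0.0)
I = rng.normal(size=N)              # external inputs
R = np.ones(N)                      # leak resistances
u = rng.normal(size=N)              # activation levels

def V(u):                           # smooth stand-in for the 0-1 filter
    return 0.5 * (1.0 + np.tanh(u))

def energy(u):
    v = V(u)
    # quadratic and linear parts of E as written above (integral term omitted)
    return -0.5 * v @ T @ v + I @ v

dt = 0.01
for _ in range(2000):               # Euler integration of dU_i/dt
    v = V(u)
    u += dt * (T @ v - u / R + I)
print("steady-state V:", np.round(V(u), 2), " E:", round(float(energy(u)), 3))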
3 THE NONLINEAR OPTIMIZATION PROBLEM
The general nonlinear optimization problem consists of a search for the values of the
independent variables Xi. optimizing a multivariable objective function so that some
conditions (equality. hi. and inequality. gj. constraints) are satisfied at the optimum.
optimize f(x_1, x_2, ..., x_N)
subject to
h_i(x_1, x_2, ..., x_N) = 0,    i = 1, 2, ..., K,    K < N
a_j ≤ g_j(x_1, x_2, ..., x_N) ≤ b_j,    j = 1, 2, ..., M
x_k^l ≤ x_k ≤ x_k^u,    k = 1, 2, ..., N
The influence of the constraint geometry on the shape of the objective function is
described in a unified manner by the Lagrangian Function:
$$L = f - v^T h$$
The v_j variables, also known as Lagrange multipliers, are unknown weighting
parameters to be specified. At the optimum, the following conditions are satisfied:
∇_x L = ∇f - v^T ∇h = 0    (N equations)    (1)
∇_v L = -h = 0    (K equations)    (2)
From (1) and (2) it is clear that the optimization problem is transformed into a nonlinear
equation solving problem. In a Generalized Hopfield Network each neuron represents an
independent variable. The nonlinear connectivity among them is determined by the
specific problem at hand and the implemented optimization algorithm. The network is
designed to relax from an initial state to a steady-state that corresponds to a locally
optimal solution of the problem.
Therefore, the optimization algorithms must be transformed into a dynamic model - a system of differential equations - that will dictate the nonlinear neural interactions.
4 OPTIMIZATION METHODS
Cauchy and Newton dynamics are the two most important unconstrained optimization
(equation solving) methods, adopted by the majority of the existing algorithms.
4.1
CAUCHY'S METHOD
This is the famous steepest descent algorithm. which tracks the direction of the largest
change in the value of the objective function. f. The "equation of motion" for a Cauchy
dynamic system is:
$$\frac{dx}{dt} = -\nabla f(x), \qquad x(0) = x_0$$
4.2
NEWTON'S METHOD
If second-order information is available, a more rapid convergence is produced using
Newton's approximation:
$$\frac{dx}{dt} = -\left[\nabla^2 f(x)\right]^{-1} \nabla f(x), \qquad x(0) = x_0$$
The steepest descent dynamics are very efficient initially, producing large
objective-value changes, but close to the optimum they become very small,
significantly increasing the convergence time. In contrast, Newton's method has
a fast convergence close to the optimum, but the optimization direction is
uncontrollable. The Levenberg - Marquardt heuristic, (Reklaitis, 1983), solves
the problem by adopting Cauchy dynamics initially and switching to Newton
dynamics near the optimum. Figure 1 shows the optimization trajectory of a
Cauchy network. The algorithm converges to locally optimal solutions.
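A minimal Python sketch of this Cauchy-then-Newton switching. All choices here
are illustrative assumptions, not the paper's: the Rosenbrock test function
stands in for f, a fixed Euler step is used for the Cauchy phase, and a
gradient-norm threshold serves as the switching rule.

import numpy as np

def f(x):       # Rosenbrock test function (stands in for the objective f)
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                     200 * (x[1] - x[0]**2)])

def hess(x):
    return np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                     [-400 * x[0], 200.0]])

x, dt = np.array([-1.5, 2.0]), 5e-4
for _ in range(50000):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    if np.linalg.norm(g) > 1.0:     # far from the optimum: Cauchy dynamics
        x = x - dt * g
    else:                           # near the optimum: Newton dynamics
        x = x - np.linalg.solve(hess(x), g)
print("x* ~", np.round(x, 6), " f(x*) ~", float(f(x)))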
[Figure 1: Convergence to Local Optima - contour plot (axes roughly -6 to 6) showing the optimization trajectory of a Cauchy network descending to a local optimum.]
5 CONSTRAINED OPTIMIZATION
The constrained optimization algorithms attempt to conveniently manipulate the equality
and inequality constraints so that the problem is finally reduced to an unconstrained
optimization, which is solved using Cauchy's or Newton's methods. Three are the most
important constrained optimization algorithms: The Augmented Lagrangian, the
Generalized Reduced Gradient (GRG) and the Successive Quadratic Programming
(SQP). Corresponding Generalized Hopfield Networks will be developed for all of them.
5.1
TRANSFORMATION METHODS - AUGMENTED LAGRANGIAN
According to the transformation methods, a measure of the distance from the feasibility
region is attached to the objective function and the problem is solved as an unconstrained
optimization one. A transformation method was employed by Hopfield. These
algorithms are proved inefficient because of numerical difficulties implicitly embedded in
their structure, (Reklaitis, 1983). The Augmented Lagrangian is specifically designed to
avoid these problems. The transformed unconstrained objective function becomes:
$$P(x, \sigma, \tau) = f(x) + R \sum_j \left\{ \langle g_j(x) + \sigma_j \rangle^2 - \sigma_j^2 \right\} + R \sum_i \left\{ [h_i(x) + \tau_i]^2 - \tau_i^2 \right\}$$
where R is a predetermined weighting factor, and σ_j, τ_i the corresponding
inequality and equality Lagrange multipliers. The operator ⟨a⟩ returns a for
a ≥ 0; otherwise it returns 0.
The design of an Augmented Lagrangian GHN requires (N +K) neurons, where N is the
number of variables and K is the number of constraints. The neuron connectivity of a
GHN with Cauchy performance is described by the following model:
$$\frac{dx}{dt} = -\nabla_x P = -\nabla f - 2R \langle g + \sigma \rangle^T \nabla g - 2R [h + \tau]^T \nabla h$$
$$-\frac{d\sigma}{dt} = +\nabla_\sigma P = 2R \langle g + \sigma \rangle - 2R \sigma$$
where ∇g and ∇h are matrices, e.g. ∇h = [∇h_1, ..., ∇h_K].
5.2
GENERALIZED REDUCED GRADIENT
According to the GRG method, K variables (basics, x̂) are determined by solving the K
nonlinear constraint equations, as functions of the rest (N - K) variables (non-basics, x̄).
Subsequently the problem is solved as a reduced-dimension unconstrained optimization
problem. Equations (1) and (2) are transformed to:
$$\tilde{\nabla} f = \nabla_{\bar{x}} f - \nabla_{\hat{x}} f \left( \nabla_{\hat{x}} h \right)^{-1} \nabla_{\bar{x}} h = 0, \qquad h(x) = 0$$
The constraint equations are solved using Newton's method. Note that the Lagrange
multipliers are explicitly eliminated. The design of a GRG GHN requires N neurons,
each one representing an independent variable. The neuron connectivity using Cauchy
dynamics for the unconstrained optimization is given by:
$$\frac{d\bar{x}}{dt} = -\tilde{\nabla} f = -\nabla_{\bar{x}} f + \nabla_{\hat{x}} f \left( \nabla_{\hat{x}} h \right)^{-1} \nabla_{\bar{x}} h, \qquad \bar{x}(0) = \bar{x}_0 \qquad (3)$$
$$h(x) = 0 \quad \left( \rightarrow \frac{d\hat{x}}{dt} = -h \left( \nabla_{\hat{x}} h \right)^{-1} \right) \qquad (4)$$
System (3)-(4) is a differential - algebraic system, with an inherent sequential character:
for each small step towards lower objective values, produced by (3), the system of
nonlinear constraints should be solved, by relaxing equations (4) to a steady-state. The
procedure is repeated until both equations (3) and (4) reach a steady state.
5.3
SUCCESSIVE QUADRATIC PROGRAMMING
In the SQP algorithm equations (1) and (2) are simultaneously solved as a nonlinear
system of equations with both the independent variables, x, and the Lagrange multipliers,
v, as unknowns. The solution is determined using Newton's method.
The design of an SQP GHN requires (N +K) neurons representing the independent
variables and the Lagrange multipliers. The connectivity of the network is determined by
the following state equations:
$$\frac{dz}{dt} = -\left[ \nabla^2 L \right]^{-1} (\nabla L), \qquad z(0) = z_0$$
where z is the augmented set of independent variables:
z = [x;v]
5.4
COMPARISON OF THE NETWORKS
The Augmented Lagrangian network is very easily programmed. Newton dynamics
should be used very carefully because the operator ⟨a⟩ is not smooth at a = 0.
The GRG network requires K fewer neurons compared to the other networks. It requires
more programming effort because of the inversion of the constraint Jacobian.
The SQP network is algorithmically the most effective, because second order information
is used in the determination of both the variables and the multipliers. It is the most
tedious to program because of the inversion of the Lagrange Hessian. All the GHNs are
proved to be stable, (Tsirukis, 1989). The following example was solved by all three
networks.
minimize f(x) = -x_1 x_2^2 x_3^3 / 81
subject to
h_1(x) = x_1^3 + x_2^2 + x_3 - 13 = 0
h_2(x) = x_2 x_3^{-1/2} - 1 = 0
Convergence was achieved by all the networks starting from both feasible and infeasible
initial points. Figures 2 and 3 depict the algorithmic superiority of the SQP network.
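Because the scanned statement of the example is only partly legible, the
problem data used below should be read as an assumption. Given it, the SQP
network dynamics dz/dt = -[grad^2 L]^{-1} grad L can be integrated directly; a
minimal Python sketch with numerical derivatives:

import numpy as np

def f(x):                       # objective of the (reconstructed) example
    return -x[0] * x[1]**2 * x[2]**3 / 81.0

def h(x):                       # equality constraints (assumed reconstruction)
    return np.array([x[0]**3 + x[1]**2 + x[2] - 13.0,
                     x[1] * x[2]**-0.5 - 1.0])

def lagrangian(z):              # L(x, v) = f(x) - v^T h(x)
    x, v = z[:3], z[3:]
    return f(x) - v @ h(x)

def num_grad(F, z, eps=1e-6):
    g = np.zeros_like(z)
    for i in range(len(z)):
        e = np.zeros_like(z)
        e[i] = eps
        g[i] = (F(z + e) - F(z - e)) / (2 * eps)
    return g

def num_hess(F, z, eps=1e-4):
    n = len(z)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        H[:, i] = (num_grad(F, z + e) - num_grad(F, z - e)) / (2 * eps)
    return H

z = np.array([1.5, 1.5, 2.0, 0.0, 0.0])   # state [x; v]; x_3 must stay > 0
for _ in range(400):
    g = num_grad(lagrangian, z)
    H = num_hess(lagrangian, z) + 1e-9 * np.eye(5)   # guard against singularity
    z = z - 0.05 * np.linalg.solve(H, g)             # Euler step of dz/dt
print("x:", np.round(z[:3], 4), " v:", np.round(z[3:], 4),
      " h(x):", np.round(h(z[:3]), 6))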
[Figures 2 and 3 plots: objective value versus time for the Augmented Lagrangian (AL), GRG and SQP networks.]
Figure 2. Feasible Initial State.
Figure 3. Infeasible Initial State.
6 OPTIMIZATION & PARALLEL COMPUTATION
The presented model can be directly translated into a parallel nonlinear optimizer nonlinear equation solver - which efficiently distributes the computational burden to a
large number of digital processors (at most N+K). Each one of them corresponds to an
optimization variable, continuously updated by numerically integrating the state
equations:
$$x_i^{(r+1)} = \phi\left(x^{(r)}, x^{(r+1)}\right)$$
where φ depends on the optimization algorithm and the integration method. After each
update the new value is communicated to the network.
The presented algorithm has some unique features: The state equations are differentials
of the same function, the Lagrangian. Therefore, a simple integration method (e.g.
explicit) can be used for the steady-state computation. Also, the integration in each
processor can be done asynchronously, independent of the state of the other processors.
Thus, the algorithm is robust to intercommunication and execution delays.
Acknowledgements
An extended version of this work has appeared in (Tsirukis, 1990). The authors wish to
thank M.I. T. Press Journals for their permission to publish it in the present form.
References
Bruck, J. and J. Goodman (1988). On the Power of Neural Networks for Solving Hard
Problems. Neural Information Processing Systems, D.Z. Anderson (ed.), American
Institute of Physics, New York, NY, 137-143.
Hopfield, J.J. (1984), Neurons with Graded Response have Collective Computational
Properties like those of Two-state Neurons, Proc. Natl. Acad. Sci. USA, vol. 81, pp. 3088-3092.
Jeffrey, W. and R. Rosner (1986), Neural Network Processing as a Tool for Function
Optimization, Neural Networks for Computing. J.S. Denker (ed.), American Institute of
Physics, New York, NY, 241-246.
Kennedy, M.P. and L.O. Chua (1988), Neural Networks for Nonlinear Programming,
IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554-562.
Reklaitis, G.V., A. Ravindran and K.M. Ragsdell (1983), Engineering Optimization:
Methods and Applications. Wiley - Interscience.
Tank, D.W. and J.J. Hopfield (1986), Simple "Neural" Optimization Networks: An A/D
Converter, Signal Decision Circuit, and a Linear Programming Circuit. IEEE
Transactions on Circuits and Systems, CAS-33, no. 5.
Tsirukis, A. G., Reklaitis, G.V., and Tenorio, M.F. (1989). Computational properties of
Generalized Hopfield Networks applied to Nonlinear Optimization. Tech. Rep. TR-EE
89-69, School of Electrical Engineering, Purdue University.
Tsirukis, A. G., Reklaitis, G.V., and Tenorio, M.F. (1990). Nonlinear Optimization using
Generalized Hopfield Networks. Neural Computation, vol. 1, no. 4.
1,982 | 2,800 | Convex Neural Networks
Yoshua Bengio, Nicolas Le Roux, Pascal Vincent, Olivier Delalleau, Patrice Marcotte
Dept. IRO, Université de Montréal
P.O. Box 6128, Downtown Branch, Montreal, H3C 3J7, Qc, Canada
{bengioy,lerouxni,vincentp,delallea,marcotte}@iro.umontreal.ca
Abstract
Convexity has recently received a lot of attention in the machine learning
community, and the lack of convexity has been seen as a major disadvantage of many learning algorithms, such as multi-layer artificial neural
networks. We show that training multi-layer neural networks in which the
number of hidden units is learned can be viewed as a convex optimization
problem. This problem involves an infinite number of variables, but can be
solved by incrementally inserting a hidden unit at a time, each time finding
a linear classifier that minimizes a weighted sum of errors.
1
Introduction
The objective of this paper is not to present yet another learning algorithm, but rather to point
to a previously unnoticed relation between multi-layer neural networks (NNs), Boosting (Freund and Schapire, 1997) and convex optimization. Its main contributions concern the mathematical analysis of an algorithm that is similar to previously proposed incremental NNs, with
L1 regularization on the output weights. This analysis helps to understand the underlying
convex optimization problem that one is trying to solve.
This paper was motivated by the unproven conjecture (based on anecdotal experience) that
when the number of hidden units is "large", the resulting average error is rather insensitive to
the random initialization of the NN parameters. One way to justify this assertion is that to really stay stuck in a local minimum, one must have second derivatives positive simultaneously
in all directions. When the number of hidden units is large, it seems implausible for none of
them to offer a descent direction. Although this paper does not prove or disprove the above
conjecture, in trying to do so we found an interesting characterization of the optimization
problem for NNs as a convex program if the output loss function is convex in the NN output and if the output layer weights are regularized by a convex penalty. More specifically,
if the regularization is the L1 norm of the output layer weights, then we show that a "reasonable" solution exists, involving a finite number of hidden units (no more than the number
of examples, and in practice typically much less). We present a theoretical algorithm that
is reminiscent of Column Generation (Chvátal, 1983), in which hidden neurons are inserted
one at a time. Each insertion requires solving a weighted classification problem, very much
like in Boosting (Freund and Schapire, 1997) and in particular Gradient Boosting (Mason
et al., 2000; Friedman, 2001).
Neural Networks, Gradient Boosting, and Column Generation
Denote x̃ ∈ R^{d+1} the extension of vector x ∈ R^d with one element with value 1. What
we call "Neural Network" (NN) here is a predictor for supervised learning of the form
ŷ(x) = Σ_{i=1}^m w_i h_i(x) where x is an input vector, h_i(x) is obtained from a linear
discriminant function h_i(x) = s(v_i · x̃) with e.g. s(a) = sign(a), or s(a) = tanh(a) or
s(a) = 1/(1 + e^{-a}). A learning algorithm must specify how to select m, the w_i's and the v_i's.
The classical solution (Rumelhart, Hinton and Williams, 1986) involves (a) selecting a loss
function Q(ŷ, y) that specifies how to penalize for mismatches between ŷ(x) and the observed
y's (target output or target class), (b) optionally selecting a regularization penalty that
favors "small" parameters, and (c) choosing a method to approximately minimize the sum of
the losses on the training data D = {(x_1, y_1), ..., (x_n, y_n)} plus the regularization penalty.
Note that in this formulation, an output non-linearity can still be used, by inserting it in the
loss function Q. Examples of such loss functions are the quadratic loss ||ŷ - y||^2, the hinge
loss max(0, 1 - yŷ) (used in SVMs), the cross-entropy loss -y log ŷ - (1 - y) log(1 - ŷ)
(used in logistic regression), and the exponential loss e^{-yŷ} (used in Boosting).
Gradient Boosting has been introduced in (Friedman, 2001) and (Mason et al., 2000) as a
non-parametric greedy-stagewise supervised learning algorithm in which one adds a function
at a time to the current solution ŷ(x), in a steepest-descent fashion, to form an additive model
as above but with the functions h_i typically taken in other kinds of sets of functions, such as
those obtained with decision trees. In a stagewise approach, when the (m + 1)-th basis h_{m+1}
is added, only w_{m+1} is optimized (by a line search), like in matching pursuit algorithms. Such
a greedy-stagewise approach is also at the basis of Boosting algorithms (Freund and Schapire,
1997), which is usually applied using decision trees as bases and Q the exponential loss.
It may be difficult to minimize exactly for w_{m+1} and h_{m+1} when the previous bases and
weights are fixed, so (Friedman, 2001) proposes to "follow the gradient" in function space,
i.e., look for a base learner h_{m+1} that is best correlated with the gradient of the average
loss on the ŷ(x_i) (that would be the residue ŷ(x_i) - y_i in the case of the square loss). The
algorithm analyzed here also involves maximizing the correlation between Q′ (the derivative
of Q with respect to its first argument, evaluated on the training predictions) and the next
basis h_{m+1}. However, we follow a "stepwise", less greedy, approach, in which all the output
weights are optimized at each step, in order to obtain convergence guarantees.
Our approach adapts the Column Generation principle (Chvátal, 1983), a decomposition
technique initially proposed for solving linear programs with many variables and few
constraints. In this framework, active variables, or "columns", are only generated as they are
required to decrease the objective. In several implementations, the column-generation
subproblem is frequently a combinatorial problem for which efficient algorithms are available.
In our case, the subproblem corresponds to determining an "optimal" linear classifier.
2
Core Ideas
Informally, consider the set H of all possible hidden unit functions (i.e., of all possible hidden
unit weight vectors vi ). Imagine a NN that has all the elements in this set as hidden units. We
might want to impose precision limitations on those weights to obtain either a countable or
even a finite set. For such a NN, we only need to learn the output weights. If we end up with
a finite number of non-zero output weights, we will have at the end an ordinary feedforward
NN. This can be achieved by using a regularization penalty on the output weights that yields
sparse solutions, such as the L1 penalty. If in addition the loss function is convex in the output
layer weights (which is the case of squared error, hinge loss, ε-tube regression loss, and
logistic or softmax cross-entropy), then it is easy to show that the overall training criterion
is convex in the parameters (which are now only the output weights). The only problem is
that there are as many variables in this convex program as there are elements in the set H,
which may be very large (possibly infinite). However, we find that with L1 regularization,
a finite solution is obtained, and that such a solution can be obtained by greedily inserting
one hidden unit at a time. Furthermore, it is theoretically possible to check that the global
optimum has been reached.
Definition 2.1. Let H be a set of functions from an input space X to R. Elements of H
can be understood as "hidden units" in a NN. Let W be the Hilbert space of functions from
H to R, with an inner product denoted by a · b for a, b ∈ W. An element of W can be
understood as the output weights vector in a neural network. Let h(x) : H → R the function
that maps any element h_i of H to h_i(x). h(x) can be understood as the vector of activations
of hidden units when input x is observed. Let w ∈ W represent a parameter (the output
weights). The NN prediction is denoted ŷ(x) = w · h(x). Let Q : R × R → R be a
cost function convex in its first argument that takes a scalar prediction ŷ(x) and a scalar
target value y and returns a scalar cost. This is the cost to be minimized on example pair
(x, y). Let D = {(x_i, y_i) : 1 ≤ i ≤ n} a training set. Let Ω : W → R be a convex
regularization functional that penalizes for the choice of more "complex" parameters (e.g.,
Ω(w) = λ||w||_1 according to a 1-norm in W, if H is countable). We define the convex NN
criterion C(H, Q, Ω, D, w) with parameter w as follows:
$$C(H, Q, \Omega, D, w) = \Omega(w) + \sum_{t=1}^{n} Q(w \cdot h(x_t), y_t). \qquad (1)$$
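For a finite H, criterion (1) is only a few lines of code, and its convexity in
w is easy to check numerically. The Python sketch below is illustrative
throughout (squared loss, Ω(w) = λ||w||_1, random tanh hidden units and toy
data):

import numpy as np

def convex_nn_cost(w, H_out, y, lam):
    """C(H, Q, Omega, D, w) with Q the squared loss and Omega(w) = lam*||w||_1.
    H_out[t, i] = h_i(x_t): activations of every hidden unit on example t."""
    preds = H_out @ w                       # y_hat(x_t) = w . h(x_t)
    return lam * np.abs(w).sum() + ((preds - y) ** 2).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
V = rng.normal(size=(3, 20))                # 20 fixed hidden units
H_out = np.tanh(X @ V)
y = rng.normal(size=50)

w1, w2 = rng.normal(size=20), rng.normal(size=20)
w_mid = 0.5 * (w1 + w2)                     # convexity: C(midpoint) <= average
print(convex_nn_cost(w_mid, H_out, y, 0.1)
      <= 0.5 * (convex_nn_cost(w1, H_out, y, 0.1)
                + convex_nn_cost(w2, H_out, y, 0.1)))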
The following is a trivial lemma, but it is conceptually very important as it is the basis for the
rest of the analysis in this paper.
Lemma 2.2. The convex NN cost C(H, Q, Ω, D, w) is a convex function of w.
Proof. Q(w · h(x_t), y_t) is convex in w and Ω is convex in w, by the above construction. C
is additive in Q(w · h(x_t), y_t) and additive in Ω. Hence C is convex in w.
Note that there are no constraints in this convex optimization program, so that at the global
minimum all the partial derivatives of C with respect to elements of w cancel.
Let |H| be the cardinality of the set H. If it is not finite, it is not obvious that an optimal
solution can be achieved in finitely many iterations.
Lemma 2.2 says that training NNs from a very large class (with one or more hidden layer)
can be seen as convex optimization problems, usually in a very high dimensional space, as
long as we allow the number of hidden units to be selected by the learning algorithm.
By choosing a regularizer that promotes sparse solutions, we obtain a solution that has a
finite number of "active" hidden units (non-zero entries in the output weights vector w).
This assertion is proven below, in theorem 3.1, for the case of the hinge loss.
However, even if the solution involves a finite number of active hidden units, the convex
optimization problem could still be computationally intractable because of the large number
of variables involved. One approach to this problem is to apply the principles already successfully embedded in Gradient Boosting, but more specifically in Column Generation (an
optimization technique for very large scale linear programs), i.e., add one hidden unit at a
time in an incremental fashion. The important ingredient here is a way to know that we
have reached the global optimum, thus not requiring to actually visit all the possible
hidden units. We show that this can be achieved as long as we can solve the sub-problem
of finding a linear classifier that minimizes the weighted sum of classification errors. This
can be done exactly only on low dimensional data sets but can be well approached using
weighted linear SVMs, weighted logistic regression, or Perceptron-type algorithms.
Another idea (not followed up here) would be to consider first a smaller set H_1, for which
the convex problem can be solved in polynomial time, and whose solution can theoretically
be selected as initialization for minimizing the criterion C(H_2, Q, Ω, D, w), with H_1 ⊂ H_2,
and where H_2 may have infinite cardinality (countable or not). In this way we could show
that we can find a solution whose cost satisfies C(H_2, Q, Ω, D, w) ≤ C(H_1, Q, Ω, D, w),
second minimization can be performed with a local descent algorithm, without the necessity
to guarantee that the global optimum will be found.
3
Finite Number of Hidden Neurons
In this section we consider the special case with Q(ŷ, y) = max(0, 1 - yŷ) the hinge loss,
and L1 regularization, and we show that the global optimum of the convex cost involves at
most n + 1 hidden neurons, using an approach already exploited in (Rätsch, Demiriz and
Bennett, 2002) for L1-loss regression Boosting with L1 regularization of output weights.
The training criterion is C(w) = K||w||_1 + Σ_{t=1}^n max(0, 1 - y_t w · h(x_t)). Let us rewrite
this cost function as the constrained optimization problem:

min_{w,ξ} L(w, ξ) = K||w||_1 + Σ_{t=1}^n ξ_t
s.t.  y_t [w · h(x_t)] ≥ 1 - ξ_t    (C1)
and   ξ_t ≥ 0, t = 1, ..., n    (C2)
Using a standard technique, the above program can be recast as a linear program. Defining
α = (α_1, ..., α_n) the vector of Lagrangian multipliers for the constraints C1, its dual
problem (P) takes the form (in the case of a finite number J of base learners):

(P):  max_α Σ_{t=1}^n α_t    s.t.    α · Z_i - K ≤ 0, i ∈ I    and    α_t ≤ 1, t = 1, ..., n

with (Z_i)_t = y_t h_i(x_t). In the case of a finite number J of base learners, I = {1, ..., J}. If
the number of hidden units is uncountable, then I is a closed bounded interval of R.
Such an optimization problem satisfies all the conditions needed for using Theorem 4.2
from (Hettich and Kortanek, 1993). Indeed:
• I is compact (as a closed bounded interval of R);
• F : α ↦ Σ_{t=1}^n α_t is a concave function (it is even a linear function);
• g : (α, i) ↦ α · Z_i - K is convex in α (it is actually linear in α);
• ν(P) ≤ n (therefore finite) (ν(P) is the largest value of F satisfying the constraints);
• for every set of n + 1 points i_0, ..., i_n ∈ I, there exists α̃ such that g(α̃, i_j) < 0 for
j = 0, ..., n (one can take α̃ = 0 since K > 0).
Then, from Theorem 4.2 from (Hettich and Kortanek, 1993), the following theorem holds:
Theorem 3.1. The solution of (P) can be attained with constraints C2 and only n + 1 constraints C1 (i.e., there exists a subset of n + 1 constraints C1 giving rise to the same maximum
as when using the whole set of constraints). Therefore, the primal problem associated is the
minimization of the cost function of a NN with n + 1 hidden neurons.
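For a finite set of J base learners, (P) is an ordinary linear program and the
sparsity statement can be observed directly: by complementary slackness, only
units whose constraint α · Z_i ≤ K is tight can carry non-zero weight in the
primal. A toy instance (random sign units, labels and an arbitrary K, solved
with scipy - all illustrative):

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, J, K = 20, 50, 5.0
h = np.sign(rng.normal(size=(J, n)))        # h_i(x_t) for J sign hidden units
y = np.sign(rng.normal(size=n))             # labels y_t
Z = h * y                                   # (Z_i)_t = y_t h_i(x_t)

# (P): max sum_t alpha_t  s.t.  Z alpha <= K (one row per unit), 0 <= alpha <= 1
res = linprog(c=-np.ones(n), A_ub=Z, b_ub=np.full(J, K), bounds=[(0, 1)] * n)
tight = int(np.sum(np.isclose(Z @ res.x, K, atol=1e-6)))
print("dual value:", round(-res.fun, 3), " tight unit constraints:", tight)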
4
Incremental Convex NN Algorithm
In this section we present a stepwise algorithm to optimize a NN, and show that there is a
criterion that allows to verify whether the global optimum has been reached. This is a
specialization of minimizing C(H, Q, Ω, D, w), with Ω(w) = λ||w||_1 and H = {h : h(x) = s(v · x̃)}
the set of soft or hard linear classifiers (depending on choice of s(·)).
Algorithm ConvexNN(D, Q, λ, s)
Input: training set D = {(x_1, y_1), ..., (x_n, y_n)}, convex loss function Q, and scalar
regularization penalty λ. s is either the sign function or the tanh function.
(1) Set v_1 = (0, 0, ..., 1) and select w_1 = argmin_{w_1} Σ_t Q(w_1 s(1), y_t) + λ|w_1|.
(2) Set i = 2.
(3) Repeat
(4)    Let q_t = Q′(Σ_{j=1}^{i-1} w_j h_j(x_t), y_t)
(5)    If s = sign
(5a)      train linear classifier h_i(x) = sign(v_i · x̃) with examples {(x_t, sign(q_t))}
          and errors weighted by |q_t|, t = 1...n (i.e., maximize Σ_t q_t h_i(x_t))
(5b)   else (s = tanh)
(5c)      train linear classifier h_i(x) = tanh(v_i · x̃) to maximize Σ_t q_t h_i(x_t).
(6)    If Σ_t q_t h_i(x_t) < λ, stop.
(7)    Select w_1, ..., w_i (and optionally v_2, ..., v_i) minimizing (exactly or
       approximately) C = Σ_t Q(Σ_{j=1}^i w_j h_j(x_t), y_t) + λ Σ_{j=1}^i |w_j|
       such that ∂C/∂w_j = 0 for j = 1...i.
(8) Return the predictor ŷ(x) = Σ_{j=1}^i w_j h_j(x).
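As a concrete (and deliberately simplified) rendering of this loop, the Python
sketch below uses the squared loss, approximates step (5c) by plain gradient
ascent on Σ_t q_t tanh(v · x̃_t), and solves step (7) for the output weights by
ISTA on the resulting LASSO problem. The solver choices, toy data and parameter
values are all assumptions, not the paper's implementation.

import numpy as np

def fit_hidden_unit(Xt, q, steps=300, lr=0.05):
    """Step (5c): gradient ascent on g(v) = sum_t q_t * tanh(v . x~_t)."""
    v = np.zeros(Xt.shape[1])
    for _ in range(steps):
        a = np.tanh(Xt @ v)
        v += lr * (Xt.T @ (q * (1.0 - a**2)))
    return v

def lasso_ista(H, y, lam, iters=3000):
    """Step (7): minimize sum_t (Hw - y)_t^2 + lam * ||w||_1 by ISTA."""
    L = 2.0 * np.linalg.norm(H, 2)**2 + 1e-12   # Lipschitz constant of gradient
    w = np.zeros(H.shape[1])
    for _ in range(iters):
        z = w - 2.0 * (H.T @ (H @ w - y)) / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sign(X[:, 0] * X[:, 1])                  # XOR-like toy target
Xt = np.hstack([X, np.ones((len(X), 1))])       # x~: append the constant 1
lam, cols, yhat = 1.0, [], np.zeros(len(X))
for i in range(25):
    q = 2.0 * (yhat - y)                        # Q'(yhat, y) for squared loss
    v = fit_hidden_unit(Xt, q)
    if q @ np.tanh(Xt @ v) < lam:               # step (6): optimum certified
        break
    cols.append(np.tanh(Xt @ v))
    H = np.column_stack(cols)
    w = lasso_ista(H, y, lam)
    yhat = H @ w
print(len(cols), "hidden units, train error",
      round(float(np.mean(np.sign(yhat) != y)), 3))

Because tanh is odd, maximizing Σ_t q_t tanh(v · x̃_t) covers both signs of a
candidate unit, and the L1 step is free to give it a negative output weight;
stopping when the best achievable score falls below λ mirrors step (6).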
A key property of the above algorithm is that, at termination, the global optimum is reached,
i.e., no hidden unit (linear classifier) can improve the objective. In the case where s = sign,
we obtain a Boosting-like algorithm, i.e., it involves finding a classifier which minimizes the
weighted cost Σ_t q_t sign(v · x̃_t).
Theorem 4.1. Algorithm ConvexNN stops when it reaches the global optimum of
C(w) = Σ_t Q(w · h(x_t), y_t) + λ||w||_1.
Proof. Let w be the output weights vector when the algorithm stops. Because the set of
hidden units H we consider is such that when h is in H, -h is also in H, we can assume
all weights to be non-negative. By contradiction, if w′ ≠ w is the global optimum, with
C(w′) < C(w), then, since C is convex in the output weights, for any ε ∈ (0, 1), we have
C(εw′ + (1 - ε)w) ≤ εC(w′) + (1 - ε)C(w) < C(w). Let w_ε = εw′ + (1 - ε)w. For ε
small enough, we can assume all weights in w that are strictly positive to be also strictly
positive in w_ε. Let us denote by I_p the set of strictly positive weights in w (and w_ε), by I_z
the set of weights set to zero in w but to a non-zero value in w_ε, and by δ_k = (w_{ε,k} - w_k)/ε
the (ε-independent) rescaled difference in the weight of hidden unit h_k between w and w_ε.
We can assume δ_j < 0 for j ∈ I_z, because instead of setting a small positive weight to h_j,
one can decrease the weight of -h_j by the same amount, which will give either the same cost,
or possibly a lower one when the weight of -h_j is positive. With o(ε) denoting a quantity
such that ε^{-1} o(ε) → 0 when ε → 0, the difference Δ_ε(w) = C(w_ε) - C(w) can now be written:

Δ_ε(w) = λ(||w_ε||_1 - ||w||_1) + Σ_t (Q(w_ε · h(x_t), y_t) - Q(w · h(x_t), y_t))
       = λ(Σ_{i ∈ I_p} εδ_i + Σ_{j ∈ I_z} -εδ_j) + Σ_t Q′(w · h(x_t), y_t) (Σ_k εδ_k h_k(x_t)) + o(ε)
       = ε Σ_{i ∈ I_p} δ_i (∂C/∂w_i)(w) + ε Σ_{j ∈ I_z} (-λδ_j + Σ_t q_t δ_j h_j(x_t)) + o(ε)
       = 0 + ε Σ_{j ∈ I_z} (-λδ_j + Σ_t q_t δ_j h_j(x_t)) + o(ε)

since for i ∈ I_p, thanks to step (7) of the algorithm, we have (∂C/∂w_i)(w) = 0. Thus the
inequality ε^{-1} Δ_ε(w) < 0 rewrites into

Σ_{j ∈ I_z} δ_j (-λ + Σ_t q_t h_j(x_t)) + ε^{-1} o(ε) < 0

which, when ε → 0, yields (note that δ_j does not depend on ε):

Σ_{j ∈ I_z} δ_j (-λ + Σ_t q_t h_j(x_t)) ≤ 0.    (2)

But, h_i being the optimal classifier chosen in step (5a) or (5c), all hidden units h_j verify
δ_j (-λ + Σ_t q_t h_j(x_t)) > 0 (since Σ_t q_t h_j(x_t) ≤ Σ_t q_t h_i(x_t) < λ and ∀ j ∈ I_z,
δ_j < 0), contradicting eq. 2.
(Mason et al., 2000) prove a related global convergence result for the AnyBoost algorithm,
a non-parametric Boosting algorithm that is also similar to Gradient Boosting (Friedman,
2001). Again, this requires solving as a sub-problem an exact minimization to find a function
h_i ∈ H that is maximally correlated with the gradient Q′ on the output. We now show a
simple procedure to select a hyperplane with the best weighted classification error.
Exact Minimization
In step (5a) we are required to find a linear classifier that minimizes the weighted sum of
classification errors. Unfortunately, this is an NP-hard problem (w.r.t. d, see theorem 4
in (Marcotte and Savard, 1992)). However, an exact solution can be easily found in O(n^3)
computations for d = 2 inputs.
Proposition 4.2. Finding a linear classifier that minimizes the weighted sum of classification
error can be achieved in O(n^3) steps when the input dimension is d = 2.
Proof. We want to maximize Σ_i c_i sign(u · x_i + b) with respect to u and b, the c_i's being
in R. Consider u fixed and sort the x_i's according to their dot product with u and denote r
the function which maps i to r(i) such that x_{r(i)} is in i-th position in the sort. Depending on
the value of b, we will have n + 1 possible sums, respectively -Σ_{i=1}^k c_{r(i)} + Σ_{i=k+1}^n c_{r(i)},
k = 0, ..., n. It is obvious that those sums only depend on the order of the products u · x_i,
i = 1, ..., n. When u varies smoothly on the unit circle, as the dot product is a continuous
function of its arguments, the changes in the order of the dot products will occur only when
there is a pair (i, j) such that u · x_i = u · x_j. Therefore, there are at most as many order
changes as there are pairs of different points, i.e., n(n - 1)/2. In the case of d = 2, we
can enumerate all the different angles for which there is a change, namely a_1, ..., a_z with
z ≤ n(n-1)/2. We then need to test at least one u = [cos(θ), sin(θ)] for each interval a_i <
θ < a_{i+1}, and also one u for θ < a_1, which makes a total of n(n-1)/2 possibilities.
It is possible to generalize this result in higher dimensions, and as shown in (Marcotte and
Savard, 1992), one can achieve O(log(n) n^d) time.
Algorithm 1 Optimal linear classifier search
Maximizing Σ_{i=1}^n c_i Φ(sign(w · x_i), y_i) in dimension 2
(1) for i = 1, ..., n for j = i + 1, ..., n
(3)    θ_{i,j} = θ(x_i, x_j) + π/2 where θ(x_i, x_j) is the angle between x_i and x_j
(6) sort the θ_{i,j} in increasing order
(7) w_0 = (1, 0)
(8) for k = 1, ..., n(n-1)/2
(9)    w_k = (cos θ_{i,j}, sin θ_{i,j}) for the k-th angle in the sorted order, u_k = (w_k + w_{k-1})/2
(10)   sort the x_i according to the value of u_k · x_i
(11)   compute S(u_k) = Σ_{i=1}^n c_i Φ(sign(u_k · x_i), y_i)
(12) output: argmax_{u_k} S
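The enumeration in the proof is short to implement. The Python sketch below
follows Proposition 4.2 rather than the abbreviated listing: it enumerates the
O(n^2) critical angles where the ordering of the u · x_i can change, and for
each test direction sweeps the bias b over the sorted projections. The
tie-breaking offsets and the demo data are illustrative assumptions.

import numpy as np

def best_weighted_linear_classifier_2d(X, c):
    """Exactly maximize S(u, b) = sum_i c_i * sign(u . x_i + b), d = 2."""
    n = len(X)
    crit = []
    for i in range(n):
        for j in range(i + 1, n):
            d = X[j] - X[i]
            if np.allclose(d, 0.0):
                continue
            # order of u.x_i and u.x_j changes when u is orthogonal to d
            crit.append(np.mod(np.arctan2(d[1], d[0]) + np.pi / 2, np.pi))
    crit = np.sort(np.unique(crit))
    thetas = np.concatenate([[crit[0] - 1e-4],
                             (crit[:-1] + crit[1:]) / 2,
                             [crit[-1] + 1e-4]])
    thetas = np.concatenate([thetas, thetas + np.pi])   # both orientations
    best = (-np.inf, None, None)
    for th in thetas:
        u = np.array([np.cos(th), np.sin(th)])
        order = np.argsort(X @ u)
        cs, ps = c[order], (X @ u)[order]
        totals = [cs.sum()]                 # k = 0: every point on the +1 side
        for k in range(n):                  # the k smallest projections get -1
            totals.append(totals[-1] - 2 * cs[k])
        k = int(np.argmax(totals))
        if totals[k] > best[0]:
            if k == 0:
                t = ps[0] - 1.0
            elif k == n:
                t = ps[-1] + 1.0
            else:
                t = (ps[k - 1] + ps[k]) / 2
            best = (totals[k], u, -t)       # sign(u . x + b) with b = -t
    return best

rng = np.random.default_rng(0)
X, q = rng.normal(size=(30, 2)), rng.normal(size=30)
value, u, b = best_weighted_linear_classifier_2d(X, q)
print("best value:", round(float(value), 3), " u:", np.round(u, 3))

In step (5a) of ConvexNN one would call this with c_t = q_t: the weighted-error
view with weights |q_t| and labels sign(q_t) is the same objective up to a
constant.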
Approximate Minimization
For data in higher dimensions, the exact minimization scheme to find the optimal linear
classifier is not practical. Therefore it is interesting to consider approximate schemes for
obtaining a linear classifier with weighted costs. Popular schemes for doing so are the linear
SVM (i.e., linear classifier with hinge loss), the logistic regression classifier, and variants of
the Perceptron algorithm. In that case, step (5c) of the algorithm is not an exact minimization,
and one cannot guarantee that the global optimum will be reached. However, it might be
reasonable to believe that finding a linear classifier by minimizing a weighted hinge loss
should yield solutions close to the exact minimization. Unfortunately, this is not generally
true, as we have found out on a simple toy data set described below. On the other hand,
if in step (7) one performs an optimization not only of the output weights w_j (j ≤ i) but
also of the corresponding weight vectors v_j, then the algorithm finds a solution close to the
global optimum (we could only verify this on 2-D data sets, where the exact solution can be
computed easily). It means that at the end of each stage, one first performs a few training
iterations of the whole NN (for the hidden units j ≤ i) with an ordinary gradient descent
mechanism (we used conjugate gradients but stochastic gradient descent would work too),
optimizing the w_j's and the v_j's, and then one fixes the v_j's and obtains the optimal w_j's for
these v_j's (using a convex optimization procedure). In our experiments we used a quadratic
Q, for which the optimization of the output weights can be done with a neural network, using
the outputs of the hidden layer as inputs.
Let us consider now a bit more carefully what it means to tune the v_j's in step (7). Indeed,
changing the weight vector v_j of a selected hidden neuron to decrease the cost is equivalent
to a change in the output weights w's. More precisely, consider the step in which the
value of v_j becomes v_j′. This is equivalent to the following operation on the w's, when w_j
is the corresponding output weight value: the output weight associated with the value v_j of
a hidden neuron is set to 0, and the output weight associated with the value v_j′ of a hidden
neuron is set to w_j. This corresponds to an exchange between two variables in the convex
program. We are justified to take any such step as long as it allows us to decrease the cost
C(w). The fact that we are simultaneously making such exchanges on all the hidden units
when we tune the v_j's allows us to move faster towards the global optimum.
Extension to multiple outputs
The multiple outputs case is more involved than the single-output case because it is not
enough to check the condition Σ_t h_t q_t > λ. Consider a new hidden neuron whose output is
h_i when the input is x_i. Let us also denote α = [α_1, ..., α_{n_o}]′ the vector of output weights
between the new hidden neuron and the n_o output neurons. The gradient with respect to α_j
is g_j = ∂C/∂α_j = Σ_t h_t q_{tj} - λ sign(α_j) with q_{tj} the value of the j-th output neuron with input
x_t. This means that if, for a given j, we have |Σ_t h_t q_{tj}| < λ, moving α_j away from 0 can
only increase the cost. Therefore, the right quantity to consider is (|Σ_t h_t q_{tj}| - λ)_+.
We must therefore find argmax_v Σ_j (|Σ_t h_t q_{tj}| - λ)_+^2. As before, this sub-problem is not
convex, but it is not as obvious how to approximate it by a convex problem. The stopping
criterion becomes: if there is no j such that |Σ_t h_t q_{tj}| > λ, then all weights must remain
equal to 0 and a global minimum is reached.
Experimental Results
We performed experiments on the 2-D double moon toy dataset (as used in (Delalleau, Bengio and Le Roux, 2005)), to be able to compare with the exact version of the algorithm. In
these experiments, Q(w · h(x_t), y_t) = [w · h(x_t) - y_t]^2. The set-up is the following:
• Select a new linear classifier, either (a) the optimal one or (b) an approximate one using
logistic regression.
• Optimize the output weights using a convex optimizer.
• In case (b), tune both input and output weights by conjugate gradient descent on C and
finally re-optimize the output weights using LASSO regression.
• Optionally, remove neurons whose output weight has been set to 0.
Using the approximate algorithm yielded for 100 training examples an average penalized
(λ = 1) squared error of 17.11 (over 10 runs), an average test classification error of 3.68%
and an average number of neurons of 5.5. The exact algorithm yielded a penalized squared
error of 8.09, an average test classification error of 5.3%, and required 3 hidden neurons. A
penalty of λ = 1 was nearly optimal for the exact algorithm whereas a smaller penalty further
improved the test classification error of the approximate algorithm. Besides, when running
the approximate algorithm for a long time, it converges to a solution whose quadratic error is
extremely close to the one of the exact algorithm.
5
Conclusion
We have shown that training a NN can be seen as a convex optimization problem, and have
analyzed an algorithm that can exactly or approximately solve this problem. We have shown
that the solution with the hinge loss involved a number of non-zero weights bounded by
the number of examples, and much smaller in practice. We have shown that there exists a
stopping criterion to verify if the global optimum has been reached, but it involves solving a
sub-learning problem involving a linear classifier with weighted errors, which can be
computationally hard if the exact solution is sought, but can be easily implemented for toy data
sets (in low dimension), for comparing exact and approximate solutions.
The above experimental results are in agreement with our initial conjecture: when there are
many hidden units we are much less likely to stall in the optimization procedure, because
there are many more ways to descend on the convex cost C(w). They also suggest, based
on experiments in which we can compare with the exact sub-problem minimization, that
applying Algorithm ConvexNN with an approximate minimization for adding each hidden
unit while continuing to tune the previous hidden units tends to lead to fast convergence
to the global minimum. What can get us stuck in a "local minimum" (in the traditional sense,
i.e., of optimizing w's and v's together) is simply the inability to find a new hidden unit
weight vector that can improve the total cost (fit and regularization term) even if there
exists one.
Note that as a side-effect of the results presented here, we have a simple way to train neural
networks with hard-threshold hidden units, since increasing Σ_t Q′(ŷ(x_t), y_t) sign(v · x̃_t)
can be either achieved exactly (at great price) or approximately (e.g. by using a cross-entropy
or hinge loss on the corresponding linear classifier).
Acknowledgments
The authors thank the following for support: NSERC, MITACS, and the Canada Research
Chairs. They are also grateful for the feedback and stimulating exchanges with Sam Roweis,
Nathan Srebro, and Aaron Courville.
References
Chvátal, V. (1983). Linear Programming. W.H. Freeman.
Delalleau, O., Bengio, Y., and Le Roux, N. (2005). Efficient non-parametric function induction
in semi-supervised learning. In Cowell, R. and Ghahramani, Z., editors, Proceedings of AISTATS'2005, pages 96-103.
Freund, Y. and Schapire, R. E. (1997). A decision theoretic generalization of on-line learning and an
application to boosting. Journal of Computer and System Science, 55(1):119-139.
Friedman, J. (2001). Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5):1189-1232.
Hettich, R. and Kortanek, K. (1993). Semi-infinite programming: theory, methods, and applications.
SIAM Review, 35(3):380-429.
Marcotte, P. and Savard, G. (1992). Novel approaches to the discrimination problem. Zeitschrift für Operations Research (Theory), 36:517-545.
Mason, L., Baxter, J., Bartlett, P. L., and Frean, M. (2000). Boosting algorithms as gradient descent.
In Advances in Neural Information Processing Systems 12, pages 512-518.
Rätsch, G., Demiriz, A., and Bennett, K. P. (2002). Sparse regression ensembles in infinite and finite
hypothesis spaces. Machine Learning.
Rumelhart, D., Hinton, G., and Williams, R. (1986). Learning representations by back-propagating
errors. Nature, 323:533-536.
Rate Distortion Codes in Sensor Networks:
A System-level Analysis
Tatsuto Murayama and Peter Davis
NTT Communication Science Laboratories
Nippon Telegraph and Telephone Corporation
"Keihanna Science City", Kyoto 619-0237, Japan
{murayama,davis}@cslab.kecl.ntt.co.jp
Abstract
This paper provides a system-level analysis of a scalable distributed sensing model for networked sensors. In our system model, a data center acquires data from a set of L sensors, each of which independently encodes its noisy observations of an original binary sequence and transmits its encoded data sequence to the data center at a combined rate R, which is limited. Supposing that the sensors use independent LDGM rate distortion codes, we show that the system performance can be evaluated for any given finite R when the number of sensors L goes to infinity. The analysis shows how the optimal strategy for the distributed sensing problem changes at critical values of the data rate R or the noise level.
1 Introduction
Device and sensor networks are shaping many activities in our society. These networks are
being deployed in a growing number of applications as diverse as agricultural management,
industrial controls, crime watch, and military applications. Indeed, sensor networks can be
considered as a promising technology with a wide range of potential future markets [1].
Still, for all the promise, it is often difficult to integrate the individual components of a sensor network in a smart way. Although we see many breakthroughs in component devices,
advanced software, and power management, system-level understanding of the emerging technology is still weak. It requires a shift in our notion of "what to look for". It requires
a study of collective behavior and resulting trade-offs. This is the issue that we address in
this article. We demonstrate the usefulness of adopting new approaches by considering the
following scenario.
Consider that a data center is interested in the data sequence $\{X(t)\}_{t=1}^{\infty}$, which cannot be observed directly. Therefore, the data center deploys a set of $L$ sensors which each independently encodes its noisy observation of the sequence, $\{Y_i(t)\}_{t=1}^{\infty}$, without sharing any information, i.e., the sensors are not permitted to communicate and decide what to send to the data center beforehand. The data center collects separate samples from all the $L$ sensors and uses them to recover the original sequence. However, since $\{X(t)\}_{t=1}^{\infty}$ is not the only pressing matter which the data center must consider, the combined data rate $R$ at which the sensors can communicate with it is strictly limited. A formulation of decentralized communication with an estimation task, the "CEO problem", was first proposed by Berger and Zhang [2], providing a new theoretical framework for large scale sensing systems. In this outstanding work, some interesting properties of such systems have been revealed. If the sensors were permitted to communicate on the basis of their pooled observations, then they would be able to smooth out their independent observation noises entirely as $L$ goes to infinity. Therefore, the data center can achieve an arbitrary fidelity $D(R)$, where $D(\cdot)$ denotes the distortion rate function of $\{X(t)\}$. In particular, the data center recovers almost complete information if $R$ exceeds the entropy rate of $\{X(t)\}$. However, if the sensors are not allowed to communicate with each other, there does not exist a finite value of $R$ for which even infinitely many sensors can make $D$ arbitrarily small [2].
In this paper, we introduce a new analytical model for a massive sensing system with a finite data rate $R$. More specifically, we assume that the sensors use LDGM codes for rate distortion coding, while the data center recovers the original sequence by using optimal "majority vote" estimation [3]. We consider the distributed sensing problem of deciding the optimal number of sensors $L$ given the combined data rate $R$. Our asymptotic analysis successfully provides the performance of the whole sensing system when $L$ goes to infinity, where the data rate for each individual sensor vanishes. Here, we exploit statistical methods which have recently been developed in the field of disordered statistical systems, in particular, the spin glass theory. The paper is organized as follows. In Section 2, we introduce a system model for the sensor network. Section 3 summarizes the results of our approach, and Section 4 provides the outline of our analysis. Conclusions are given in the last section.
2 System Model
Let $P(x)$ be a probability distribution common to $\{X(t)\} \in \mathcal{X}$, and $W(y|x)$ be a stochastic matrix defined on $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{Y}$ denotes the common alphabet of $\{Y_i(t)\}$, with $i = 1, \cdots, L$ and $t \ge 1$. In the general setup, we assume the instantaneous joint probability distribution takes the form

$$\Pr[x, y_1, \cdots, y_L] = P(x) \prod_{i=1}^{L} W(y_i | x)$$

for the temporally memoryless source $\{X(t)\}_{t=1}^{\infty}$. Here, the random variables $Y_i(t)$ are conditionally independent when $X(t)$ is given, and the conditional probabilities $W[y_i(t)|x(t)]$ are identical for all $i$ and $t$. In this paper, we impose binary assumptions on the problem, i.e., the data sequence $\{X(t)\}$ and its noisy observations $\{Y_i(t)\}$ are all assumed to be binary sequences. Therefore, the stochastic matrix can be parameterized as

$$W(y|x) = \begin{cases} 1 - p, & \text{if } y = x \\ p, & \text{otherwise} \end{cases}$$

where $p \in [0, 1]$ represents the observation noise. Note also that the alphabets have been selected as $\mathcal{X} = \mathcal{Y}$. Furthermore, for simplicity, we also assume that $P(x) = 1/2$ always holds, implying that a purely random source is observed.
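As a concrete illustration, the following sketch samples the source and the L noisy sensor observations under this model; all function and variable names are ours, and the parameter values are arbitrary.

import numpy as np

# A minimal sketch of the observation model: a purely random binary source
# X(t) with P(x) = 1/2, observed by L sensors through independent binary
# symmetric channels W(y|x) with flip probability p.
rng = np.random.default_rng(0)

def observe(n, L, p):
    x = rng.integers(0, 2, size=n)           # source block X(1), ..., X(n)
    flips = rng.random((L, n)) < p           # independent per-sensor noise
    y = x[None, :] ^ flips.astype(np.int64)  # Y_i(t) = X(t) XOR noise
    return x, y

x, y = observe(n=10_000, L=5, p=0.1)
print((y != x).mean())                       # empirical flip rate, close to p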
At the encoding stage, a sensor $i$ encodes a block $\mathbf{y}_i = [y_i(1), \cdots, y_i(n)]^T$ of length $n$ from the noisy observation $\{y_i(t)\}_{t=1}^{\infty}$ into a block $\mathbf{z}_i = [z_i(1), \cdots, z_i(m)]^T$ of length $m$ defined on $\mathcal{Z}$. Hereafter, we take the Boolean representation of the binary alphabet $\mathcal{X} = \{0, 1\}$; therefore $\mathcal{Y} = \mathcal{Z} = \{0, 1\}$ as well. Let $\hat{\mathbf{y}}_i$ be a reproduction sequence for the block, and let $m < n$ be a known integer. Then, making use of a Boolean matrix $A_i$ of dimensionality $n \times m$, we are to find an $m$-bit codeword sequence $\mathbf{z}_i = [z_i(1), \cdots, z_i(m)]^T$ which satisfies

$$\hat{\mathbf{y}}_i = A_i \mathbf{z}_i \pmod{2}, \qquad (1)$$

where the fidelity criterion

$$D = \frac{1}{n}\, d_H(\mathbf{y}_i, \hat{\mathbf{y}}_i) \qquad (2)$$

holds [4]. Here the Hamming distance $d_H(\cdot, \cdot)$ is used as the distortion measure. Note that we have applied modulo-2 arithmetic for the additive operation in (1). Let $A_i$ be characterized by $K$ ones per row and $C$ per column. The finite, and usually small, numbers $K$ and $C$ define a particular LDGM code family. The data center then collects the $L$ codeword sequences $\mathbf{z}_1, \cdots, \mathbf{z}_L$. Since all the $L$ codewords are of the same length $m$, the combined data rate will be $R = L \cdot m/n$. Therefore, in our scenario, the data center deploys exchangeable sensors with fixed-quality reproductions $\hat{\mathbf{y}}_1, \cdots, \hat{\mathbf{y}}_L$. Lastly, the $t$-th symbol of the estimate $\hat{\mathbf{x}} = [\hat{x}(1), \cdots, \hat{x}(n)]^T$ is calculated by majority vote [3],

$$\hat{x}(t) = \begin{cases} 0, & \text{if } \hat{y}_1(t) + \cdots + \hat{y}_L(t) \le L/2 \\ 1, & \text{otherwise} \end{cases} \qquad (3)$$

Therefore, the overall performance of the system can be measured by the expected bit error frequency for decisions by the majority vote (3), $\mathrm{Pe} = \Pr[\mathbf{x} \neq \hat{\mathbf{x}}]$.
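Continuing the sketch above, the majority vote (3) is a one-liner; here it is applied to the uncoded case $\hat{\mathbf{y}}_i = \mathbf{y}_i$, since a full LDGM encoder is beyond this sketch.

# Majority vote (3): decide 0 when the column sum is at most L/2, else 1,
# then measure the expected bit error frequency Pe = Pr[x != x_hat].
def majority_vote(y_hat):
    L = y_hat.shape[0]
    return np.where(y_hat.sum(axis=0) <= L / 2, 0, 1)

x_hat = majority_vote(y)        # here y_hat_i = y_i (no compression)
print((x_hat != x).mean())      # empirical Pe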
In this paper, we consider two limiting cases of decentralization: (1) the extreme situation of $L \to \infty$, and (2) the case of $L = R$. The former case means that the data rate for an individual sensor vanishes, while the latter corresponds to transmission without coding. In general, it is difficult to determine which level is optimal for the estimation, i.e., which scenario results in the smaller value of $\mathrm{Pe}$. Indeed, by using rate distortion codes, the data center can use as many sensors as possible for a given $R$. However, the quality of the individual reproductions would then be less informative. The best choice seems to depend largely on $R$, as well as on $p$.
3 Main Results
For simplicity, we consider the following two solvable cases: $K = 2$ for $C \ge K$, and the optimal case of $K \to \infty$. Let $p$ be a given observation noise level, and $R$ the finite real value of a given combined data rate. Letting $L \to \infty$, we find the expected bit error frequency to be

$$\mathrm{Pe}(p, R) = \int_{-\infty}^{-(1-2p)\, c_g \sqrt{R}} \mathcal{N}(0, 1)\, dr \qquad (4)$$

with the constant value

$$c_g = \begin{cases} \dfrac{\sqrt{\gamma}}{2} + \dfrac{1}{\sqrt{\gamma}}\left(\ln 2 + \dfrac{\hat{\sigma}^2}{2\gamma} - \dfrac{1}{2}\left\langle\!\left\langle \tanh^2 \hat{x} \right\rangle\!\right\rangle_{\hat{\pi}(\hat{x})}\right) & (K = 2) \\[2mm] \sqrt{2\ln 2} & (K \to \infty) \end{cases} \qquad (5)$$

where the rescaled variance $\hat{\sigma}^2 = \gamma \left\langle\!\left\langle \hat{x}^2 \right\rangle\!\right\rangle_{\hat{\pi}(\hat{x})}$ and the first-step RSB enforcement

$$-\frac{\gamma}{2} + \ln 2 + \frac{\hat{\sigma}^2}{2\gamma} - \frac{1}{2}\left\langle\!\left\langle \tanh^2 \hat{x}\, \big(1 + 2\hat{x}\,\operatorname{csch}\hat{x}\,\operatorname{sech}\hat{x}\big) \right\rangle\!\right\rangle_{\hat{\pi}(\hat{x})} = 0$$

holds. Here $\mathcal{N}(X, Y)$ denotes the normal distribution with mean $X$ and variance $Y$. The rescaled variance $\hat{\sigma}^2$ and the scale-invariant parameter $\gamma$ are determined numerically, where we use the following notation:

$$\left\langle\!\left\langle\, \cdot \,\right\rangle\!\right\rangle_{\tilde{\pi}(\tilde{x})} = \int_{-\infty}^{+\infty} \frac{d\tilde{x}}{\sqrt{2\pi\hat{\sigma}^2}} \exp\left(-\frac{\tilde{x}^2}{2\hat{\sigma}^2}\right) (\,\cdot\,),$$

$$\left\langle\!\left\langle\, \cdot \,\right\rangle\!\right\rangle_{\hat{\pi}(\hat{x})} = \int_{-1}^{+1} \frac{d\hat{x}}{\sqrt{2\pi\hat{\sigma}^2}}\, (1 - \hat{x}^2)^{-1} \exp\left(-\frac{(\tanh^{-1}\hat{x})^2}{2\hat{\sigma}^2}\right) (\,\cdot\,).$$
[Figure 1: $\mathrm{Pe}^{(\mathrm{dB})}(p, R)$ for $K = 2$. (a) Narrow band: curves for $R = 1, 2, 10$. (b) Broadband: curves for $R = 100, 500, 1000$. Horizontal axis: noise level $p$ from 0 to 0.5.]
Therefore, it is straightforward to evaluate (4) with (5) for given parameters $p$ and $R$. For a given finite value of $R$, we examine what happens to the quality of the estimate when the noise level $p$ varies. Fig. 1 and Fig. 2 show the typical behavior of the bit error frequency, $\mathrm{Pe}(p, R)$, in decibels (dB), where the reference level is chosen as
$$\mathrm{Pe}^{(0)}(p, R) = \begin{cases} \displaystyle\sum_{l=0}^{(R-1)/2} \binom{R}{l} (1-p)^l\, p^{R-l}, & (R \text{ is odd}) \\[3mm] \displaystyle\sum_{l=0}^{R/2-1} \binom{R}{l} (1-p)^l\, p^{R-l} + \frac{1}{2}\binom{R}{R/2} (1-p)^{R/2}\, p^{R/2}, & (R \text{ is even}) \end{cases} \qquad (6)$$

for a given integer $R$. The reference (6) denotes $\mathrm{Pe}$ for the case of $L = R$, i.e., the case when the sensors are not allowed to compress their observations. Here, in decibels, we have

$$\mathrm{Pe}^{(\mathrm{dB})}(p, R) = 10 \log \frac{\mathrm{Pe}(p, R)}{\mathrm{Pe}^{(0)}(p, R)},$$
where the log is to base 10. Note that the zero level in decibels occurs when the measured error frequency $\mathrm{Pe}(p, R)$ is equal to the reference level. Therefore, it is also possible to have negative levels, which would mean an expected bit error frequency much smaller than the reference level. In the case of a small combined data rate $R$, the narrow band case, the numerical results in Fig. 1 (a) and Fig. 2 (a) show that the quality of the estimate is sensitive to the parity of the integer $R$. In particular, the $R = 2$ case has the lowest threshold level, $p_c = 0.0921$ for Fig. 1 (a) and $p_c = 0.082$ for Fig. 2 (a) respectively, beyond which the $L \to \infty$ scenario outperforms the $L = R$ scenario, while the $R = 1$ case does not have such a threshold. In contrast, if the bandwidth is wide enough, the difference of the expected bit error probabilities in decibels, $\mathrm{Pe}^{(\mathrm{dB})}(p, R)$, proves to have similar qualitative characteristics, as shown in Fig. 1 (b) and Fig. 2 (b). Moreover, our preliminary experiments for larger systems also indicate that the thresholds $p_c$ seem to converge to the values 0.165 and 0.146, respectively, as $R$ goes to infinity; we are currently working on the theoretical derivation.
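The threshold behaviour can be checked numerically. The sketch below implements the reference level (6), the K → ∞ limit of (4) with c_g = √(2 ln 2) as reconstructed above, and a bisection for the crossover p_c; for R = 2 it lands near the quoted 0.082. All names are ours.

import math
from math import comb

def Pe0(p, R):
    # Reference level (6): plain R-sensor majority vote (L = R, no coding),
    # with a fair coin on ties when R is even.
    if R % 2 == 1:
        return sum(comb(R, l) * (1 - p) ** l * p ** (R - l)
                   for l in range((R - 1) // 2 + 1))
    head = sum(comb(R, l) * (1 - p) ** l * p ** (R - l) for l in range(R // 2))
    return head + 0.5 * comb(R, R // 2) * ((1 - p) * p) ** (R // 2)

def Pe_coded(p, R):
    # Main result (4) in the K -> infinity case, where c_g = sqrt(2 ln 2).
    z = -(1 - 2 * p) * math.sqrt(2 * math.log(2) * R)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))    # standard normal CDF

def threshold(R, lo=1e-4, hi=0.5 - 1e-4):
    # Bisection for p_c where the coded and uncoded curves cross.
    f = lambda p: Pe_coded(p, R) - Pe0(p, R)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(threshold(2))   # ~0.08, close to the quoted p_c = 0.082 for Fig. 2 (a)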
4 Outline of Derivation
Since the predetermined matrices $A_1, \cdots, A_L$ are selected randomly, it is quite natural to say that the instantaneous series, defined by $\hat{\mathbf{y}}(t) = [\hat{y}_1(t), \cdots, \hat{y}_L(t)]^T$, can be modeled
[Figure 2: $\mathrm{Pe}^{(\mathrm{dB})}(p, R)$ for $K \to \infty$. (a) Narrow band: curves for $R = 1, 2, 10$. (b) Broadband: curves for $R = 100, 500, 1000$. Horizontal axis: noise level $p$ from 0 to 0.5.]
using Bernoulli trials. Here, the reproduction problem reduces to a channel model, where the stochastic matrix is defined as

$$W(\hat{y}|x) = \begin{cases} q, & \text{if } \hat{y} = x \\ 1 - q, & \text{otherwise} \end{cases} \qquad (7)$$

where $q$ denotes the quality of the reproductions, i.e., $\Pr[x \neq \hat{y}_i] = 1 - q$ for $i = 1, \cdots, L$. Letting the channel model (7) for the reproduction problem be valid, the expected bit error frequency can be well captured by using the cumulative probability distributions

$$\mathrm{Pe} = \Pr[\mathbf{x} \neq \hat{\mathbf{x}}] = \begin{cases} B\!\left(\frac{L-1}{2} : L, q\right), & \text{if } L \text{ is odd} \\[1mm] B\!\left(\frac{L}{2} - 1 : L, q\right) + \frac{1}{2}\, b\!\left(\frac{L}{2} : L, q\right), & \text{otherwise} \end{cases} \qquad (8)$$

with

$$B(L' : L, q) = \sum_{l=0}^{L'} b(l : L, q), \qquad b(l : L, q) = \binom{L}{l} q^l (1 - q)^{L-l},$$

where the integer $l$ is the total number of non-flipped elements in $\hat{\mathbf{y}}(t)$, and the second term $\frac{1}{2} b(L/2 : L, q)$ represents random guessing when $l = L/2$. Note that the reproduction quality $q$ can be easily obtained by the simple algebra $q = pD + (1 - p)(1 - D)$, where $D$ is the distortion with respect to coding.
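The map from coding distortion to system error rate is easy to evaluate directly; the sketch below transcribes (8) and the relation q = pD + (1 − p)(1 − D), with names of our choosing.

from math import comb

def q_from_D(p, D):
    # Per-sensor reproduction quality q = Pr[x = y_hat_i], from the coding
    # distortion D and the observation noise p.
    return p * D + (1 - p) * (1 - D)

def Pe_from_q(L, q):
    # Majority-vote error probability (8), via the cumulative binomial B.
    b = lambda l: comb(L, l) * q ** l * (1 - q) ** (L - l)
    if L % 2 == 1:
        return sum(b(l) for l in range((L - 1) // 2 + 1))
    return sum(b(l) for l in range(L // 2)) + 0.5 * b(L // 2)

print(Pe_from_q(L=11, q=q_from_D(p=0.1, D=0.2)))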
Since the error probability (8) is given as a function of $q$, we first derive an analytical solution for the quality $q$ in the limit $L \to \infty$, keeping $R$ finite. In this approach, we apply the method of statistical mechanics to evaluate the typical performance of the codes [4]. As a first step, we translate the Boolean alphabets $\mathcal{Z} = \{0, 1\}$ to the "Ising" ones, $\mathcal{S} = \{+1, -1\}$. Consequently, we need to translate the additive operations, such as $z_i(s) + z_i(s')$ (mod 2), into their multiplicative representations, $\sigma_i(s) \cdot \sigma_i(s') \in \mathcal{S}$ for $s, s' = 1, \cdots, m$. Similarly, we translate the Boolean $y_i(t)$s into the Ising $J_i(t)$s. For simplicity, we omit the subscript $i$, which labels the $L$ agents, in the rest of this section. Following the prescription of Sourlas [5], we examine the Gibbs-Boltzmann distribution

$$\Pr[\boldsymbol{\sigma}] = \frac{\exp\left[-\beta H(\boldsymbol{\sigma}|J)\right]}{Z(J)} \quad \text{with} \quad Z(J) = \sum_{\boldsymbol{\sigma}} e^{-\beta H(\boldsymbol{\sigma}|J)}, \qquad (9)$$

where the Hamiltonian of the Ising system is defined as

$$H(\boldsymbol{\sigma}|J) = -\sum_{s_1 < \cdots < s_K} A_{s_1 \ldots s_K}\, J[t(s_1, \ldots, s_K)]\, \sigma(s_1) \cdots \sigma(s_K). \qquad (10)$$

The observation index $t(s_1, \ldots, s_K)$ specifies the proper value of $t$ given the set $s_1, \ldots, s_K$, so that it corresponds to the parity check equation (1). Here the elements of the symmetric tensor $A_{s_1 \ldots s_K}$, representing dilution, are either zero or one depending on the set of indices $(s_1, \ldots, s_K)$. Since there are $C$ non-zero elements randomly chosen for any given index $s$, we find $\sum_{s_2, \ldots, s_K} A_{s s_2 \ldots s_K} = C$. The code rate is $R/L = K/C$, because a reproduction sequence has $C$ bits per index $s$ and carries $K$ bits of the codeword. It is easy to see that the Hamiltonian (10) counts the reproduction errors, $[1 - J_{t(s_1, \ldots, s_K)} \cdot \sigma(s_1) \cdots \sigma(s_K)]/2$.

Moreover, according to statistical mechanics, we can easily derive the "observable" quantities using the free energy defined as

$$f = -\frac{1}{\beta} \left\langle \ln Z(J) \right\rangle_{A, J},$$

which carries all information about the statistics of the system. Here, $\beta$ denotes an "inverse temperature" for the Gibbs-Boltzmann distribution (9), and $\langle \cdot \rangle_{A, J}$ represents the configurational average. Therefore, we have to average the logarithm of the partition function $Z(J)$ over the given distribution after the calculation of the partition function itself. Finally, to perform such a program, the replica trick is used [6]. The theory of replica symmetry breaking provides the free energy, resulting in the expression

$$f = -\frac{1}{\beta n}\Bigg[\ln\cosh\beta - K\left\langle\!\left\langle \ln\left[1 + \tanh(\beta x)\tanh(\beta\hat{x})\right] \right\rangle\!\right\rangle_{\pi(x),\,\hat{\pi}(\hat{x})} + \left\langle\!\left\langle \ln \frac{1}{2}\sum_{J=\pm 1}\prod_{l=1}^{K}\left[1 + \tanh(\beta J)\tanh(\beta x_l)\right] \right\rangle\!\right\rangle_{\pi(x)} + \frac{C}{K}\left\langle\!\left\langle \ln \sum_{\sigma=\pm 1}\prod_{l=1}^{C}\left[1 + \sigma\tanh(\beta\hat{x}_l)\right] \right\rangle\!\right\rangle_{\hat{\pi}(\hat{x})}\Bigg], \qquad (11)$$

where $\langle\!\langle \cdot \rangle\!\rangle_{\pi(x)}$ denotes averaging over the $\pi(x_l)$s, and so on. The variation of (11) by $\pi(x)$ and $\hat{\pi}(\hat{x})$ under the condition of normalization gives the saddle point condition

$$\pi(x) = \left\langle\!\left\langle \delta\Big(x - \sum_{l=1}^{C-1}\hat{x}_l\Big) \right\rangle\!\right\rangle_{\hat{\pi}(\hat{x})}, \qquad \hat{\pi}(\hat{x}) = \frac{1}{2}\sum_{J=\pm 1}\left\langle\!\left\langle \delta\left[\hat{x} - \Phi(x_1, \ldots, x_{K-1}; J)\right] \right\rangle\!\right\rangle_{\pi(x)},$$

where

$$\Phi(x_1, \ldots, x_{K-1}; J) = \frac{1}{\beta}\tanh^{-1}\left[\tanh(\beta J)\prod_{l=1}^{K-1}\tanh(\beta x_l)\right].$$

We now investigate the case of $K = 2$. Applying the central limit theorem to $\pi(x)$ [7], we get

$$\pi(x) = \frac{1}{\sqrt{2\pi C\sigma^2}}\, e^{-\frac{x^2}{2C\sigma^2}}, \qquad (12)$$

where $\sigma^2$ is the variance of $\hat{\pi}(\hat{x})$. Here the resulting distribution (12) is an even function. The leading contribution to $\Phi$ is then given by $\Phi(x; J) \approx J \cdot \tanh(\beta x)$ as $\beta$ goes to zero; the expression is valid in the asymptotic region $L \gg 1$ for a fixed $R$. Then, the formula for the delta function yields [8]

$$\hat{\pi}(\hat{x}) = \left\langle\!\left\langle \delta\left[\hat{x} - \tanh(\beta x)\right] \right\rangle\!\right\rangle_{\pi(x)} = \frac{(1 - \hat{x}^2)^{-1}}{\sqrt{2\pi\beta^2 C\sigma^2}}\exp\left(-\frac{(\tanh^{-1}\hat{x})^2}{2\beta^2 C\sigma^2}\right), \qquad (13)$$

where we have used $\hat{x} = \Phi(x; J) \approx J\tanh(\beta x)$.
+1
d?
x
x
?2
?)2
(tanh?1 x
x2 ?? (?x) =
exp
?
? 2 = ?
?2
2? 2 C? 2
2?? 2 C? 2 1 ? x
?1
for given ? 2 C. Inserting (12), (13) into (11), we get
f =?
?
R
1 ? 2?2
? ?? (?x)
? ln 2 +
? tanh2 x
2
?
2
2
1
? x?
with ?
? (?
x) =
e 2?2 C?2 ,
2?? 2 C? 2
where we rewrite x
? = ?x. The theory of replica symmetry breaking tells us that relevant
value of ? should not be smaller than the ?freezing point? ? g , which implies the vanishing
entropy condition:
1
2
1 ? 2? 2
?f
= ? + 2 ln 2 +
tanh2 x
? (1 + 2?
x csch x
? sech x
?) ?? (?x) = 0 .
??
2 ?g C
2
Accordingly, it is convenient for us to define a scaling invariant parameter ? = ? g2 C, and to
rewrite the variance ?
? 2 = ?? 2 for simplicity. Introducing these newly defined parameters,
the above results could be summarized as follows. Given R and L, we find
R
2
? 1 ?
1 ?
?2
?
? ln 2
+
?
tanh2 x
??? (?x)
f=
L
2 2
?
2 2
?
x2 ?? (?x) , where the condition
with ?
? 2 = ? ?
1
2
?2
1 ?
tanh2 x
? (1 + 2?
x csch x
? sech x
?) ?? (?x) = 0
? + ln 2 +
?
2 ?
2
?
holds. Here we denote
? ?? (?x) =
?
??
+1
? ?? (?x) =
?1
(14)
x
?2
exp ? 2 ( ? ) ,
2?
?
2? ?? 2
d?
x
?)2
(tanh?1 x
?
(?).
(1 ? x
?2 )?1 exp ?
2?
?2
2? ?? 2
?
d?
x
Lastly, by using the cumulative probability distribution, we get
L/2
L/2
L l
Pe =
q (1 ? q)L?l ?
dr N(Lq, Lq(1 ? q)) .
l
0
(15)
l=0
It is easy to see that (15) can
normal distribution by changing
be converted to a standard
r = dr/ Lq(1 ? q), yielding
variables to r? = (r ? Lq)/ Lq(1 ? q) [7], so d?
r?g
Pe ? ? d?
r N(0, 1)
? L
with
?
1
r?g = 2 L(1 ? 2p) D ?
2
R
1?
2 ln 2 ?
1 ?? 2
(1 ? 2p) ?
?
tanh2 x
?? ? + ?
??? (?x) .
=
2
2
?
2
?
Note that the relation D = (1 + f)/2 holds at the vanishing entropy condition (14) [4].
Finally, we obtain the main result (4) in Section 3 in the limit L ? ?, when we use proper
notations for the variables and the name of the function.
We can investigate the asymptotic case of $K \to \infty$ in a similar way. Since the leading contribution to $\hat{\pi}(\hat{x})$ comes from the value of $x$ in the vicinity of $\sqrt{C\sigma^2}$, we find the expression $\hat{\pi}(\hat{x}) \approx \delta\big(\hat{x} - \tilde{y}^K (C\sigma^2)^{K/2}\big)$ by using power counting. Therefore, within the Parisi RSB scheme, one obtains the set of equations

$$\sqrt{L}\, f = -\sqrt{R}\left(\frac{\sqrt{\gamma_c}}{2} + \frac{\ln 2}{\sqrt{\gamma_c}}\right), \qquad -\frac{1}{2} + \frac{\ln 2}{\gamma_c} = 0$$

with the scale-invariant $\gamma_c = \beta^2 L$. This results in $c_g = \sqrt{2\ln 2}$, as mentioned before.
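As a numerical aside, the K = 2 self-consistency σ² = ⟨⟨x̂²⟩⟩ can be solved by simple fixed-point iteration; under our reconstruction, x̂ = tanh(z) with z ~ N(0, β²Cσ²), so one Monte-Carlo line suffices. Names and parameter values are ours; a non-trivial solution exists when β²C > 1.

import numpy as np

def sigma2_fixed_point(beta2C, iters=100, n_samples=200_000, seed=0):
    # Iterate sigma^2 <- E[tanh^2(z)], z ~ N(0, beta2C * sigma^2), reusing
    # one set of base Gaussian samples so the fixed-point map is smooth.
    rng = np.random.default_rng(seed)
    z0 = rng.standard_normal(n_samples)
    s2 = 0.5                                  # initial guess
    for _ in range(iters):
        s2 = float(np.mean(np.tanh(np.sqrt(beta2C * s2) * z0) ** 2))
    return s2

print(sigma2_fixed_point(beta2C=2 * np.log(2)))   # e.g. gamma = 2 ln 2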
5 Conclusion
This paper provides a system-level perspective on massive sensor networks. The decentralized sensing problem considered in this paper was first addressed by Berger and his collaborators. However, this paper is the first work that gives a scheme for analyzing practically tractable codes at a given finite data rate, and it shows the existence of a threshold noise level at which the optimal level of decentralization changes. Future work includes the theoretical derivation of the threshold level $p_c$ as $R$ goes to infinity, as well as the implementation problem.
Acknowledgments
The authors thank Jun Muramatsu and Naonori Ueda for useful discussions. This work was
supported by the Ministry of Education, Science, Sports and Culture (MEXT) of Japan,
under the Grant-in-Aid for Young Scientists (B), 15760288.
References
[1] (2005) Intel@Mote. [Online]. Available: http://www.intel.com/research/exploratory/motes.htm
[2] T. Berger, Z. Zhang, and H. Viswanathan, "The CEO problem," IEEE Trans. Inform. Theory, vol. 42, pp. 887-902, May 1996.
[3] D. J. C. MacKay, Information Theory, Inference and Learning Algorithms. Cambridge, UK: Cambridge University Press, 2003.
[4] T. Murayama and M. Okada, "Rate distortion function in the spin glass state: a toy model," in Advances in Neural Information Processing Systems 15 (NIPS'02), Denver, USA, Dec. 2002, pp. 423-430.
[5] N. Sourlas, "Spin-glass models as error-correcting codes," Nature, vol. 339, pp. 693-695, June 1989.
[6] V. Dotsenko, Introduction to the Replica Theory of Disordered Statistical Systems. Cambridge, UK: Cambridge University Press, 2001.
[7] W. Hays, Statistics (5th Edition). Belmont, CA: Wadsworth Publishing, 1994.
[8] C. W. Wong, Introduction to Mathematical Physics: Methods and Concepts. Oxford, UK: Oxford University Press, 1991.
Optimal cue selection strategy
Vidhya Navalpakkam
Department of Computer Science
USC, Los Angeles
[email protected]
Laurent Itti
Department of Computer Science
USC, Los Angeles
[email protected]
Abstract
Survival in the natural world demands the selection of relevant visual cues to rapidly and reliably guide attention towards prey and predators in cluttered environments. We investigate whether our visual system selects cues that guide search in an optimal manner. We formally obtain the optimal cue selection strategy by maximizing the signal-to-noise ratio (SNR) between a search target and surrounding distractors. This optimal strategy successfully accounts for several phenomena in visual search behavior, including the effect of target-distractor discriminability, uncertainty in the target's features, distractor heterogeneity, and linear separability. Furthermore, the theory generates a new prediction, which we verify through psychophysics experiments with human subjects. Our results provide direct experimental evidence that humans select visual cues so as to maximize SNR between the targets and surrounding clutter.
1 Introduction
Detecting a yellow tiger among distracting foliage in different shades of yellow and brown requires efficient top-down strategies that select relevant visual cues to enable rapid and reliable detection of the target among several distractors. For simple scenarios such as searching for a red target, the Guided Search theory [17] predicts that search efficiency can be improved by boosting the red feature in a top-down manner. But for more complex and natural scenarios such as detecting a tiger in the jungle or looking for a face in a crowd, finding the optimum amount of top-down enhancement to be applied to each low-level feature dimension encoded by the early visual system is non-trivial. It must consider not only features present in the target, but also those present in the distractors. In this paper, we formally obtain the optimal cue selection strategy and investigate whether our visual system has evolved to deploy it. In Section 2, we formulate cue selection as an optimization problem where the relevant goal is to maximize the signal-to-noise ratio (SNR) of the saliency map, so that the target becomes most salient and quickly draws attention, thereby minimizing search time. Next, we show through simulations that this optimal top-down guided search theory successfully accounts for several observed phenomena in visual search behavior, such as the effect of target-distractor discriminability, uncertainty in the target's features, distractor heterogeneity, linear separability, and more. In Section 4, we describe the design and analysis of psychophysics experiments to test new, counter-intuitive predictions of the theory. The results of our study suggest that humans deploy optimal cue selection strategies to detect targets in cluttered and distracting environments.
To quickly find a target among distractors, we wish to maximize the salience of the target
relative to the distractors. Thus we can define the signal to noise ratio (SN R) as the ratio of
salience of the target to the distractors. Assuming that visual cues or features are encoded
by populations of neurons in early visual areas, we define the optimal cue selection strategy
as the best choice of neural response gain that maximizes the signal to noise ratio (SN R).
In the rest of this section, we formally obtain the optimal choice of gain in neural responses
that will maximize SN R.
SN R in a visual search paradigm: In a typical visual search paradigm, the salience of the
target and distractors is a random variable that depends on their location in the search array,
their features, the spatial configuration of target and distractors, and that varies between
identical repeated trials due to internal noise in neural response to the visual input. Hence,
we express SN R as the ratio of expected salience of the target over expected salience of
the distractors, with the expectation taken over all possible target and distractor locations,
their features and spatial configurations, and over several repeated trials.
Mean salience of the Target
SN R = Mean salience of the distractor
Search array and its stimuli: Let search array A be a two-dimensional display that consists of one target T and several distractors Dj (j = 1...N 2 -1). Let the display be divided
into an invisible N ? N grid, with one item occuring at each cell (x, y) in the grid. Let
the color, contrast, orientation and other target parameters ?T be chosen from a distribution
P (?|T ). Similarly, for each distractor Dj , let its parameters ?Dj be sampled independently
from a distribution P (?|D). Thus, search array A has a fixed choice of target and distractor
parameters. Next, the spatial configuration C is decided by a random permutation of some
assignment of the target and distractors to the N 2 cells in A (such that there is exactly one
item in each cell). Thus, for a given search array A, the spatial configuration as well as
stimulus parameters are fixed. Finally, given a choice of parameter ? and its spatial location (x, y), we generate an image pattern R(?) (a set of pixels and their values) and embed
it at location (x, y) in search array A. Thus, we generate search array A.
Saliency computation: Let the input search array A be processed by a population
of neurons with gaussian tuning curves tuned to different stimulus parameters such as
?1 , ?2 , ...?n . The output of this early visual processing stage is used to compute saliency
maps si (x, y, A) of search array A, that consist of the visual salience at every location (x, y)
for feature-values ?i (i = 1...n). Let si (x, y, A) be combined linearly to form S(x, y, A),
the overall salience at location (x, y). Further, assuming a multiplicative gain gi on the ith
saliency map, we obtain:
X
S(x, y, A) =
gi si (x, y, A)
(1)
i
Salience of the target and distractors: Let ST (A) be a random variable representing the
salience of the target T in search array A. To factor out the variability due to internal
noise ?, we consider E? [ST (A)], which is the mean salience of the target over repeated
identical presentations of A. Further, let EC [ST (A)] be the mean salience of the target
averaged over all spatial configurations of a given set of target and distractor parameters.
Similarly, E?|T [ST (A)] is the mean salience of the target over all target parameters. The
mean salience of the target combined over several repeated presentations of the search array
A (to factor out internal noise ?), over all spatial configurations C, and over all choices of
target parameters ?|T is given below. Further, since ?, C and ? are independent random
variables, we can rewrite the joint expectation as follows:
E[ST (A)] = E?|T [EC [E? [ST (A)]]]
(2)
Let SD (A) represent the mean salience of distractors Dj (j = 1...N 2 -1) in search array
A. Similar to computing the mean salience of the target, we find the mean salience of
distractors over all ?, C and ?|D.
SD (A) = EDj [siDj (A)]
(3)
E[SD (A)] = E?|D [EC [E? [SD (A)]]]
(4)
SN R and its optimization: The additive salience and multiplicative gain hypothesis in
eqn. 1 yields the following:
n
X
E[ST (A)] =
gi E?|T [EC [E? [siT (A)]]]
(5)
i=1
E[SD (A)]
n
X
=
gi E?|T [EC [E? [siT (A)]]] (similarly)
(6)
i=1
SN R can be expressed in terms of salience as:
Pn
gi E?|T [EC [E? [siT (A)]]]
SN R = Pni=1
(7)
i=1 gi E?|D [EC [E? [siD (A)]]]
We wish to find the optimal choice of gi that maximises SN R. Hence, we differentiate
SN R wrt gi to get the following:
Pn
gj E?|T [EC [E? [sjT (A)]]]
E?|T [EC [E? [siT (A)]]]
j=1
P
n
E?|D [EC [E? [siD (A)]]] ?
gj E?|D [EC [E? [sjD (A)]]]
?
j=1
Pn
(8)
SN R =
gj E?|D [EC [E? [sjD (A)]]]
?gi
j=1
E?|D [EC [E? [siD (A)]]]
=
SN Ri
SN R
?1
(9)
?i
where ?i is a normalization term and SN Ri is the signal-to-noise ratio of the ith saliency
map.
SN Ri = E?|T [EC [E? [siT (A)]]]/E?|D [EC [E? [siD (A)]]]
(10)
d
The sign of the derivative, dgi SN R
tells us whether gi should be increased, degi =1
creased or maintained at the baseline activation 1 in order to maximize SN R.
SN Ri
SN R
<
=
>
d
SN R < 0 ? SN R increases as gi decreases ? gi < 1
dgi
d
1?
SN R = 0 ? SN R does not change with gi ? gi = 1
dgi
d
1?
SN R > 0 ? SN R increases as gi increases ? gi > 1
dgi
1?
(11)
(12)
(13)
Ri
Thus, we obtain an intuitive result that gi increases as SN
SN R increases. We simplify this
monotonic relationship assuming proportionality. Further, if we impose a restriction that
the gains cannot be increased indiscriminately, but must sum to some constant, say the total
number of saliency maps (n), we have the following:
SN Ri
let gi ?
(14)
SN R
X
SN Ri
gi = n ? gi = P
(15)
if
i
i
SN Ri
n
Thus the gain of a saliency map tuned to a band of feature-values depends on the strength
of the signal-to-noise ratio in that band compared to the mean signal-to-noise ratio in all
bands in that feature dimension.
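Equation (15) is a one-liner in code; the sketch below normalizes a vector of per-map SNRs into gains that sum to n. Names and values are ours.

import numpy as np

def optimal_gains(snr):
    # Gains (15): each map's SNR_i relative to the mean SNR over the n maps.
    snr = np.asarray(snr, dtype=float)
    return len(snr) * snr / snr.sum()

print(optimal_gains([2.0, 1.0, 0.5, 0.5]))   # array summing to n = 4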
3 Predictions of the optimal cue selection strategy
To understand the implications of biasing features according to the optimal cue selection strategy, we simulate a simple model of early visual cortex. We assume that each feature dimension is encoded by a population of neurons with overlapping Gaussian tuning curves that are broadly tuned to different features in that dimension. Let $f_i(\theta)$ represent the tuning curve of the $i$th neuron in a population of broadly tuned neurons with overlapping tuning curves. Let the tuning width $\sigma$ and amplitude $a$ be equal for all neurons, and let $\theta_i$ represent the preferred stimulus parameter (or feature) of the $i$th neuron:

$$f_i(\theta) = a \exp\left(-\frac{(\theta - \theta_i)^2}{2\sigma^2}\right) \qquad (16)$$

Let $\vec{r}(\theta(x, y, A)) = \{r_1(\theta(x, y, A)), \ldots, r_n(\theta(x, y, A))\}$ be the population response to a stimulus parameter $\theta(x, y, A)$ at a location $(x, y)$ in search array $A$, where $r_i$ refers to the response of the $i$th neuron and $n$ is the total number of neurons in the population. Let the neural response $r_i(\theta(x, y, A))$ be a Poisson random variable:

$$P\big(r_i(\theta(x, y, A)) = z\big) = \mathcal{P}_{f_i(\theta(x, y, A))}(z) \qquad (17)$$

For simplicity, let us assume that the local neural response $r_i(\theta(x, y, A))$ is a measure of salience $s_i(x, y, A)$. Using eqns. 2, 4, 10, 16 and 17, we can derive the mean salience of the target and distractor, and use it to compute $\mathrm{SNR}_i$:

$$s_i(x, y, A) = r_i(\theta(x, y, A)) \qquad (18)$$

$$E[s_{iT}(A)] = E_{\theta|T}[f_i(\theta)] \qquad (19)$$

$$E[s_{iD}(A)] = E_{\theta|D}[f_i(\theta)] \qquad (20)$$

$$\mathrm{SNR}_i = \frac{E_{\theta|T}[f_i(\theta)]}{E_{\theta|D}[f_i(\theta)]} \qquad (21)$$

Finally, the gains $g_i$ on each saliency map can be found using eqn. 15. Thus, for a given distribution of stimulus parameters for the target, $P(\theta|T)$, and distractors, $P(\theta|D)$, we simulate the above model of early visual cortex, compute the salience of target and distractors, compute $\mathrm{SNR}_i$, and obtain $g_i$. In the rest of this section, we plot the distribution of the optimal choice of gains $g_i$ for an exhaustive list of conditions where knowledge of the target and distractors varies from complete certainty to uncertainty.
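A minimal sketch of this simulation follows: Gaussian tuning curves (16), mean responses (19)-(20), SNR_i (21), and gains (15). The grid, tuning width, amplitude, and the small spontaneous rate b are illustrative choices of ours; b is not part of eq. (16) and is added only to bound SNR_i for neurons tuned far from both stimuli.

import numpy as np

thetas = np.linspace(0.0, 180.0, 721)     # stimulus parameter grid (degrees)
centers = np.arange(0.0, 180.1, 2.5)      # preferred features theta_i
sigma, a, b = 5.0, 1.0, 0.5               # tuning width, amplitude, baseline

def tuning(theta):
    # f_i(theta), eq. (16), plus an assumed spontaneous rate b.
    return b + a * np.exp(-(theta - centers[:, None]) ** 2 / (2 * sigma ** 2))

def snr_and_gains(p_T, p_D):
    f = tuning(thetas)                    # shape (n_neurons, n_thetas)
    ET, ED = f @ p_T, f @ p_D             # mean responses, eqs (19)-(20)
    snr_i = ET / ED                       # eq (21)
    return snr_i, len(snr_i) * snr_i / snr_i.sum()   # gains, eq (15)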
Unknown target and distractors: In the trivial case where there is no knowledge of the target and distractors, all cues are equally relevant and the optimal choice of gains is the same as the baseline activation (unity). SNR is at its minimum, leading to slow search. This prediction is consistent with visual search experiments that observe slow search when the target and distractors are unknown due to reversal between trials [1, 2].

Search for a known target: During search for a known target, the optimal strategy predicts that SNR can be maximised by boosting neurons according to how strongly they respond to the target feature (as shown in Figure 1, the predicted SNR is 12.2 dB). Thus, a neuron that is optimally tuned to the target feature receives maximal gain. This prediction is consistent with single-unit recordings on feature-based attention, which show that the gain in neural response depends on the similarity between the neuron's preferred feature and the target feature [3, 4].

Role of uncertainty in target features: When there is uncertainty in the target's features, i.e., when the target's parameter assumes multiple values according to some probability distribution $P(\theta|T)$, the optimal strategy predicts that SNR decreases, leading to slower search (as shown in Figure 1, SNR decreases from 12.2 dB to 9 dB). This result is consistent with psychophysics experiments which suggest that better knowledge of the target leads to faster search [5, 6].

Distractor heterogeneity: While searching for an unknown target among known distractors, the optimal strategy predicts that SNR can be maximised by suppressing the neurons tuned to the distractors (see Figure 1). But as we increase distractor heterogeneity, or the number of distractor types, it predicts a decrease in SNR (from 36 dB to 17 dB, Figure 1). This result is consistent with experimental data [10].

Discriminability between target and distractors: Several experiments and theories have studied the effect of target-distractor discriminability [10]-[17]. The optimal cue selection strategy also shows that if the target and distractors are very different, or highly discriminable, SNR is high and the search is efficient (SNR = 51.4 dB, see Figure 1). Otherwise, if they are similar and not well separated in feature space, SNR is low and the search is hard (SNR = 16.3 dB, see Figure 1). Moreover, during search for a target that is less discriminable from the distractors, the optimal strategy predicts that the neuron optimally tuned to the target may not be boosted maximally. Instead, a neuron that is sub-optimally tuned to the target and farther away from the distractors receives maximal gain. This new and counter-intuitive prediction is tested by the visual search experiments described in the next section.

Linear separability effect: The optimal strategy also predicts the linear separability effect [18, 19], which suggests that when the target and distractors are less discriminable, search is easier if the target and distractors can be separated by a line in feature space (see Figure 1). This effect has been demonstrated in size (e.g., search for the smallest or largest item is faster than search for a medium-sized item in the display) [20], chromaticity and luminance [21, 19], and orientation [22, 23].
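Running the sketch above with a 55-degree target among 50-degree distractors reproduces the counter-intuitive prediction numerically: the gain profile peaks near 60 degrees rather than at the target feature itself (the exact peak location depends on the assumed tuning width and baseline).

def narrow_prior(theta0, width=0.5):
    # A near-delta prior P(theta|.) on the grid; the width is illustrative.
    p = np.exp(-(thetas - theta0) ** 2 / (2 * width ** 2))
    return p / p.sum()

snr_i, g = snr_and_gains(narrow_prior(55.0), narrow_prior(50.0))
print(centers[np.argmax(g)])              # ~60 degrees, not 55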
4 Testing new predictions of the optimal cue selection strategy
In this section, we describe the design and analysis of psychophysics experiments to verify the counter-intuitive prediction mentioned in the previous section, i.e., that when searching for a target that is less discriminable from the distractors, a neuron that is sub-optimally tuned to the target's feature will be boosted more than a neuron that is optimally tuned to the target's feature.

4.1 Design of psychophysics experiments

Our experiments are designed in two phases: phase 1 to set up the top-down bias and phase 2 to measure the bias.

Phase 1 - Set up the top-down bias: Subjects perform the primary task T1, which is a visual search for the target among distractors. This task sets the top-down bias on cues so that the target becomes the most salient item in the display, thus accelerating target detection. Subjects are trained on T1 trials until their performance stabilises with at least 80% accuracy. They are instructed to find the target (55° tilt) among several distractors (50° tilt). The target and distractors are the same for all T1 trials. To avoid false reports (which may occur due to boredom or lack of attention) and to verify that subjects indeed find the target, we introduce a novel no-cheat scheme as follows: after finding the target among distractors, subjects press any key. Following the key press, we flash a grid of fine-print random numbers briefly (120 ms) and ask subjects to report the number at the target's location. Online feedback on the accuracy of the report is provided. Thus, the top-down bias is set up by performing T1 trials.
[Figure 1 panels (a)-(h): each panel shows, left to right, the priors $P(\theta|T)$ and $P(\theta|D)$ over the stimulus parameter, the mean firing rate to $T$ and $D$ as a function of the neuron's preferred $\theta$, and the optimal response gain as a function of the neuron's preferred $\theta$.]

Figure 1: a) Search for a known target. Left: prior knowledge $P(\theta|T)$ has a peak at the known target feature and $P(\theta|D)$ is flat as the distractor is unknown. Middle: the expected response of a population of neurons to the target is highest for neurons tuned around the target's $\theta$, while the expected response to the distractors is flat. Right: the optimal response gain in this situation is to boost the gain of the neurons that are tuned around the target's $\theta$. b) Search for an uncertain target; c) Unknown target among a known distractor; d) Presence of heterogeneous distractors; e) High discriminability between target and distractors; f) Low discriminability; g) Search for an extreme feature (linearly separable) among others; h) Search for a mid feature (nonlinearly separable) among others.
[Figure 2 bar plots: for each of the four subjects, the number of reports on the 80°, 60°, 55° and 50° cues.]

Figure 2: The results of the T2 trials described in section 4.1 (phase 2) are shown here. For each of the four subjects, the number of reports on the steepest (80°), relevant (60°), target (55°) and distractor (50°) cues are shown in these bar plots. As predicted by the theory, a paired t-test reveals that the number of reports on the relevant cue is significantly higher (p < 0.05) than the number of reports on the target, distractor and steepest cues, as indicated by the blue star.
Phase 2 - Measure the top-down bias: To measure the top-down bias generated by the above task, we randomly insert T2 trials in between T1 trials. Our theory predicts that during search for the target (55°) among distractors (50°), the most relevant cue will be around 60° and not 55°. To test this, we briefly (200 ms) flash four cues: steepest (S, 80°), relevant as predicted by our theory (R, 60°), target (T, 55°) and distractor (D, 50°). A cue that is biased more appears more salient, attracts a saccade, and gets reported. In other words, the greater the top-down bias on a cue, the higher the number of its reports. According to our theory, there should be a higher number of reports on R than on T.

Experimental details: We ran 4 naïve subjects. All were aged 22-30, had normal or corrected vision, and volunteered or participated for course credit. As mentioned earlier, each subject received training on T1 trials for a few days until performance (search speed) stabilised with at least 80% accuracy. To become familiar with the secondary task, they were trained on 50 T2 trials. Finally, each subject performed 10 blocks of 50 trials each, with T2 trials randomly inserted in between T1 trials.
4.2 Results
For each of the four subjects, we extracted the number of reports on the steepest ($N_S$), relevant ($N_R$), target ($N_T$) and distractor ($N_D$) cues, for each block. We used a paired t-test to check for statistically significant differences between $N_R$ and $N_T$, $N_D$, $N_S$. Results are shown in Figure 2. As predicted by the theory, we found a significantly higher number of reports on the relevant cue than on the target cue.
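A sketch of the per-subject analysis with scipy follows; the count arrays below are placeholders standing in for one subject's ten blocks, not the measured data.

import numpy as np
from scipy import stats

n_R = np.array([6, 7, 5, 8, 6, 7, 9, 6, 7, 8])   # relevant-cue reports/block
n_T = np.array([4, 5, 3, 5, 4, 6, 5, 4, 5, 5])   # target-cue reports/block

t, p = stats.ttest_rel(n_R, n_T)                 # paired t-test, N_R vs N_T
print(t, p < 0.05)                               # repeat against N_D and N_S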
5 Discussion
In this paper, we have investigated whether our visual system has evolved to use optimal top-down strategies that select relevant cues to quickly and reliably detect the target in distracting environments. We formally obtained the optimal cue selection strategy, where cues are chosen such that the signal-to-noise ratio (SNR) of the saliency map is maximized, thus maximizing the target's salience relative to the distractors. The resulting optimal strategy is to boost a cue or feature if it provides a higher signal-to-noise ratio than average. Through simulations, we confirmed the predictions of the optimal strategy against existing experimental data on visual search behavior, including the effect of distractor heterogeneity [10], uncertainty in the target's features [5, 6], target-distractor discriminability [10], and the linear separability effect [18, 19]. Our study complements the recent work on optimal eye movement strategies [24]. While we focus on an early stage of visual processing - optimal cue selection in order to create a saliency map with maximum SNR - their study focuses on a later stage of visual processing - optimal saccade generation such that, for a given saliency map, the probability of subsequent target detection is maximized. Thus, both optimal cue selection and saccade generation are necessary for optimal visual search.
Acknowledgements
This work was supported by the National Science Foundation, National Eye Institute, National Imagery and Mapping Agency, Zumberge Innovation Fund, and Charles Lee Powell Foundation.
References
[1] V. Maljkovic and K. Nakayama. Mem Cognit, 22(6):657-672, Nov 1994.
[2] J. M. Wolfe, S. J. Butcher, and M. Hyle. J Exp Psychol Hum Percept Perform, 29(2):483-502, 2003.
[3] S. Treue and J. C. Martinez Trujillo. Nature, 399(6736):575-579, Jun 1999.
[4] J. C. Martinez-Trujillo and S. Treue. Curr Biol, 14(9):744-751, May 2004.
[5] J. M. Wolfe, T. S. Horowitz, N. Kenner, M. Hyle, and N. Vasan. Vision Res, 44(12):1411-1426, Jun 2004.
[6] Timothy J. Vickery, Li-Wei King, and Yuhong Jiang. J Vis, 5(1):81-92, Feb 2005.
[7] A. Triesman and J. Souther. Journal of Experimental Psychology: Human Perception and Performance, 14:107-141, 1986.
[8] A. Treisman and S. Gormican. Psychological Review, 95(1):15-48, 1988.
[9] R. Rosenholtz. Percept Psychophys, 63(3):476-489, Apr 2001.
[10] J. Duncan and G. W. Humphreys. Psychological Rev, 96:433-458, 1989.
[11] A. L. Nagy and R. R. Sanchez. Journal of the Optical Society of America A, 7(7):1209-1217, 1990.
[12] H. Pashler. Percept Psychophys, 41(4):385-392, Apr 1987.
[13] K. Rayner and D. L. Fisher. Percept Psychophys, 42(1):87-100, Jul 1987.
[14] A. Treisman. J Exp Psychol Hum Percept Perform, 17(3):652-676, Aug 1991.
[15] J. Palmer, P. Verghese, and M. Pavel. Vision Res, 40(10-12):1227-1268, 2000.
[16] J. M. Wolfe, K. R. Cave, and S. L. Franzel. J. Exper. Psychol., 15:419-433, 1989.
[17] J. M. Wolfe. Psychonomic Bulletin and Review, 1(2):202-238, 1994.
[18] M. D'Zmura. Vision Research, 31(6):951-966, 1991.
[19] B. Bauer, P. Jolicoeur, and W. B. Cowan. Vision Research, 36(10):1439-1465, 1996.
[20] A. Treisman and G. Gelade. Cognitive Psychology, 12:97-136, 1980.
[21] B. Bauer, P. Jolicoeur, and W. B. Cowan. Vision Res, 36(10):1439-1465, May 1996.
[22] J. M. Wolfe, S. R. Friedman-Hill, M. I. Stewart, and K. M. O'Connell. J Exp Psychol Hum Percept Perform, 18(1):34-49, Feb 1992.
[23] W. F. Alkhateeb, R. J. Morris, and K. H. Ruddock. Spat Vis, 5(2):129-141, 1990.
[24] J. Najemnik and W. S. Geisler. Nature, 434(7031):387-391, Mar 2005.
An aVLSI cricket ear model
André van Schaik*
The University of Sydney
NSW 2006, AUSTRALIA
[email protected]
Richard Reeve+
University of Edinburgh
Edinburgh, UK
[email protected]
Craig Jin*
[email protected]
Tara Hamilton*
[email protected]
Abstract
Female crickets can locate males by phonotaxis to the mating song
they produce. The behaviour and underlying physiology has been
studied in some depth showing that the cricket auditory system
solves this complex problem in a unique manner. We present an
analogue very large scale integrated (aVLSI) circuit model of this
process and show that results from testing the circuit agree with
simulation and what is known from the behaviour and physiology
of the cricket auditory system. The aVLSI circuitry is now being
extended to use on a robot along with previously modelled neural
circuitry to better understand the complete sensorimotor pathway.
1 Introduction
Understanding how insects carry out complex sensorimotor tasks can help in the
design of simple sensory and robotic systems. Often insect sensors have evolved
into intricate filters matched to extract highly specific data from the environment
which solves a particular problem directly with little or no need for further
processing [1]. Examples include head stabilisation in the fly, which uses vision
amongst other senses to estimate self-rotation and thus to stabilise its head in flight,
and phonotaxis in the cricket.
Because of the narrowness of the cricket body (only a few millimetres), the
Interaural Time Difference (ITD) for sounds arriving at the two sides of the head is
very small (10-20 µs). Even with the tympanal membranes (eardrums) located, as
they are, on the forelegs of the cricket, the ITD only reaches about 40 µs, which is
too low to detect directly from timings of neural spikes. Because the wavelength of
the cricket calling song is significantly greater than the width of the cricket body the
Interaural Intensity Difference (IID) is also very low. In the absence of ITD or IID
information, the cricket uses phase to determine direction. This is possible because
the male cricket produces an almost pure tone for its calling song.
* School of Electrical and Information Engineering, The University of Sydney.
+ Institute of Perception, Action and Behaviour, University of Edinburgh.
Figure 1: The cricket auditory system. Four acoustic inputs channel sounds
directly or through tracheal tubes onto two tympanal membranes. Sound
from contralateral inputs has to pass a (double) central membrane (the
medial septum), inducing a phase delay and reduction in gain. The sound
transmission from the contralateral tympanum is very weak, making each
eardrum effectively a 3 input system.
The physics of the cricket auditory system is well understood [2]; the system (see
Figure 1) uses a pair of sound receivers with four acoustic inputs, two on the
forelegs, which are the external surfaces of the tympana, and two on the body, the
prothoracic or acoustic spiracles [3]. The connecting tracheal tubes are such that
interference occurs as sounds travel inside the cricket, producing a directional
response at the tympana to frequencies near to that of the calling song. The
amplitude of vibration of the tympana, and hence the firing rate of the auditory
afferent neurons attached to them, vary as a sound source is moved around the
cricket and the sounds from the different inputs move in and out of phase. The
outputs of the two tympana match when the sound is straight ahead, and the inputs
are bilaterally symmetric with respect to the sound source. However, when sound at
the calling song frequency is off-centre the phase of signals on the closer side comes
better into alignment, and the signal increases on that side, and conversely decreases
on the other. It is that crossover of tympanal vibration amplitudes which allows the
cricket to track a sound source (see Figure 6 for example).
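As a rough phasor-addition illustration of this crossover cue (not the chip circuitry itself, and with invented gains and geometry; only the 28 µs and 62 µs internal delays quoted later in the text are taken from the paper), one eardrum's response to a pure tone can be computed as the magnitude of a sum of delayed, attenuated contributions:

```python
import numpy as np

# Illustrative phasor model of one tympanum: the response is the sum of a
# direct input and two internal (tracheal) contributions, each with its own
# gain and delay. All numbers except the tone frequency and the two internal
# delays are invented for illustration; they are NOT calibrated chip values.
F_TONE = 4.7e3                       # approximate calling-song frequency (Hz)
C_SOUND = 344.0                      # speed of sound (m/s)
INPUT_SEP = 0.02                     # assumed acoustic input separation (m)
INTERNAL = [(1.0, 0.0),              # (gain, delay in s) of the direct path
            (0.8, 28e-6),            # first internal path (assumed gain)
            (0.3, 62e-6)]            # second internal path (assumed gain)

def tympanum_amplitude(azimuth_rad, flip=1):
    """Amplitude of one eardrum for a tone arriving from azimuth_rad.

    flip = +1 for one ear, -1 mirrors the geometry for the other ear.
    External arrival delays use a simple free-field geometry.
    """
    w = 2 * np.pi * F_TONE
    positions = np.array([0.0, 0.5, 1.0]) * INPUT_SEP * flip
    external = positions * np.sin(azimuth_rad) / C_SOUND
    phasor = sum(g * np.exp(-1j * w * (d + e))
                 for (g, d), e in zip(INTERNAL, external))
    return abs(phasor)

angles = np.radians(np.arange(-180, 181, 5))
left = [tympanum_amplitude(a, flip=+1) for a in angles]
right = [tympanum_amplitude(a, flip=-1) for a in angles]
# The two curves match straight ahead and cross over off-axis, which is
# the cue the cricket steers by (compare Figure 6).
```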
A simplified version of the auditory system using only two acoustic inputs was
implemented in hardware [4], and a simple 8-neuron network was all that was
required to then direct a robot to carry out phonotaxis towards a species-specific
calling song [5].
A simple simulator was also created to model the behaviour of the auditory system
of Figure 1 at different frequencies [6]. Data from Michelsen et al. [2] (Figures 5
and 6) were digitised, and used together with average and "typical" values from the
paper to choose gains and delays for the simulation. Figure 2 shows the model of the
internal auditory system of the cricket from sound arriving at the acoustic inputs
through to transmission down auditory receptor fibres. The simulator implements
this model up to the summing of the delayed inputs, as well as modelling the
external sound transmission.
Results from the simulator were used to check the directionality of the system at
different frequencies, and to gain a better understanding of its response. It was
impractical to check the effect of leg movements or of complex sounds in the
simulator due to the necessity of simulating the sound production and transmission.
An aVLSI chip was designed to implement the same model, both allowing more
complex experiments, such as leg movements to be run, and experiments to be run
in the real world.
Figure 2: A model of the auditory system of the cricket, used to build the
simulator and the aVLSI implementation (shown in boxes).
These experiments with the simulator and the circuits are being published in [6] and
the reader is referred to those papers for more details. In the present paper we
present the details of the circuits used for the aVLSI implementation.
2 Circuits
The chip, implementing the aVLSI box in Figure 2, comprises two all-pass delay
filters, three gain circuits, a second-order narrow-band band-pass filter, a first-order
wide-band band-pass filter, a first-order high-pass filter, as well as supporting
circuitry (including reference voltages, currents, etc.). A single aVLSI chip (MOSIS
tiny-chip) thus includes half the necessary circuitry to model the complete auditory
system of a cricket. The complete model of the auditory system can be obtained by
using two appropriately connected chips.
Only two all-pass delay filters need to be implemented instead of three as suggested
by Figure 2, because it is only the relative delay between the three pathways
arriving at the one summing node that counts. The delay circuits were implemented
with fully-differential gm-C filters. In order to extend the frequency range of the
delay, a first-order all-pass delay circuit was cascaded with a second-order all-pass
delay circuit. The resulting addition of the first-order delay and the second-order
delay allowed for an approximately flat delay response for a wider bandwidth as the
decreased delay around the corner frequency of the first-order filter cancelled with
the increased delay of the second-order filter around its resonant frequency. Figure 3
shows the first- and second-order sections of the all-pass delay circuit. Two of these
circuits were used and, based on data presented in [2], were designed with delays of
28 µs and 62 µs, by way of bias current manipulation. The operational transconductance amplifier (OTA) in figure 3 is a standard OTA which includes the
common-mode feedback necessary for fully differential designs. The buffers (Figure
3) are simple, cascoded differential pairs.
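The flattening of the combined delay can be checked directly from the textbook analog all-pass transfer functions. The sketch below is illustrative only: the corner frequency, resonant frequency, and Q are placeholders, not the fabricated tunings.

```python
import numpy as np

# Analog all-pass sections: |H| = 1 everywhere; only the phase (and hence
# the group delay, -d(phase)/d(omega)) varies with frequency.
def allpass1(w, w0):
    s = 1j * w
    return (w0 - s) / (w0 + s)                  # first-order all-pass

def allpass2(w, w0, q):
    s = 1j * w
    num = s**2 - (w0 / q) * s + w0**2
    den = s**2 + (w0 / q) * s + w0**2
    return num / den                            # second-order all-pass

def group_delay(h, w):
    phase = np.unwrap(np.angle(h))
    return -np.gradient(phase, w)

w = 2 * np.pi * np.logspace(2, 5, 2000)         # 100 Hz .. 100 kHz
W0_1, W0_2, Q = 2*np.pi*6e3, 2*np.pi*12e3, 0.8  # placeholder tunings
tau = group_delay(allpass1(w, W0_1) * allpass2(w, W0_2, Q), w)
# Near the first-order corner the first section's delay droops while the
# second-order section's delay peaks, so the cascade stays roughly flat
# over a wider bandwidth.
```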
Figure 3: The first-order all-pass delay circuit (left) and the second-order
all-pass delay (right).
The differential output of the delay circuits is converted into a current which is
multiplied by a variable gain implemented as shown in Figure 4. The gain cell
includes a differential pair with source degeneration via transistors N4 and N5. The
source degeneration improves the linearity of the current. The three gain cells
implemented on the aVLSI have default gains of 2, 3 and 0.91 which are set by
holding the default input high and appropriately ratioing the bias currents through
the value of vbiasp. To correct any on-chip mismatches and/or explore other gain
configurations a current splitter cell [7] (p-splitter, figure 4) allows the gain to be
programmed by digital means post fabrication. The current splitter takes an input
current (Ibias, figure 4) and divides it into branches which recursively halve the
current, i.e., the first branch gives 1/2 Ibias, the second branch 1/4 Ibias, the third
branch 1/8 Ibias and so on. These currents can be used together with digitally
controlled switches as a Digital-to-Analogue converter. By holding default low and
setting C5:C0 appropriately, any gain from 4 to 0.125 can be set. To save on
output pins the program bits (C5:C0) for each of the three gain cells are set via a
single 18-bit shift register in bit-serial fashion.
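Because the splitter halves the current recursively, the programmed gain is just a binary-weighted sum over the enabled branches. The weight assignment in the sketch below (bit k contributing 2^(2-k), so a single bit spans the stated 4 to 0.125 range) is our reading of the text, not a schematic-level model:

```python
# Binary-weighted gain programming for one gain cell. Branch k of the
# p-splitter carries Ibias / 2^(k+1); here we ASSUME bits C5..C0 map to
# weights 4, 2, 1, 0.5, 0.25, 0.125 so the stated 4..0.125 range is
# reachable with a single bit set.
WEIGHTS = [2.0 ** (2 - k) for k in range(6)]    # [4, 2, 1, 0.5, 0.25, 0.125]

def programmed_gain(bits):
    """bits: iterable of six 0/1 values, ordered C5 (MSB) .. C0 (LSB)."""
    return sum(w for w, b in zip(WEIGHTS, bits) if b)

assert programmed_gain([1, 0, 0, 0, 0, 0]) == 4.0
assert programmed_gain([0, 0, 0, 0, 0, 1]) == 0.125
assert programmed_gain([0, 1, 1, 0, 0, 0]) == 3.0   # one of the default gains

# The three cells' control words are concatenated into the single 18-bit
# word that is shifted in serially.
```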
Summing the output of the three gain circuits in the current domain simply involves
connecting three wires together. Therefore, a natural option for the filters that
follow is to use current domain filters. In our case we have chosen to implement
log-domain filters using MOS transistors operating in weak inversion. Figure 5
shows the basic building blocks for the filters, the Tau Cell [8] and the multiplier
cell, and block diagrams showing how these blocks were connected to create the
necessary filtering blocks. The Tau Cell is a log-domain filter which has the first-order response:
$$\frac{I_{out}}{I_{in}} = \frac{1}{s\tau + 1}, \quad \text{where } \tau = \frac{n C_a V_T}{I_a}$$
and n = the slope factor, VT = thermal voltage, Ca = capacitance, and Ia = bias
current. In figure 5, the input currents to the Tau Cell, Imult and A*Ia, are only used
when building a second-order filter. The multiplier cell is simply a translinear loop
where $I_{out1} \cdot I_{mult} = I_{out2} \cdot A I_a$, or $I_{mult} = A I_a I_{out2} / I_{out1}$. The configurations of the Tau
Cell to get particular responses are covered in [8] along with the corresponding
equations. The high frequency filter of Figure 2 is implemented by the high-pass
filter in Figure 5 with a corner frequency of 17kHz. The low frequency filter,
however, is divided into two parts since the biological filter?s response (see for
example Figure 3A in [9]) separates well into a narrow second-order band-pass filter
with a 10kHz resonant frequency and a wide band-pass filter made from a first-order
high-pass filter with a 3kHz corner frequency followed by a first-order low-pass
filter with a 12kHz corner frequency. These filters are then added together to
reproduce the biological filter. The filters? responses can be adjusted post
fabrication via their bias currents. This allows for compensation due to processing
and matching errors.
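One can sanity-check this decomposition by evaluating the branches at s = jω and summing them, as in the sketch below. The 10 kHz, 3 kHz, 12 kHz and 17 kHz values come from the text; the Q and the equal branch weighting are assumptions:

```python
import numpy as np

def s_of(f):
    return 1j * 2 * np.pi * f

def bandpass2(f, f0, q):
    """Second-order band-pass, unity gain at resonance."""
    s, w0 = s_of(f), 2 * np.pi * f0
    return (w0 / q) * s / (s**2 + (w0 / q) * s + w0**2)

def highpass1(f, fc):
    s, wc = s_of(f), 2 * np.pi * fc
    return s / (s + wc)

def lowpass1(f, fc):
    s, wc = s_of(f), 2 * np.pi * fc
    return wc / (s + wc)

f = np.logspace(2, 5, 1000)                      # 100 Hz .. 100 kHz
narrow = bandpass2(f, 10e3, q=4.0)               # q is a placeholder
wide = highpass1(f, 3e3) * lowpass1(f, 12e3)     # wide band-pass branch
low_freq = narrow + wide                         # summed "biological" filter
high_freq = highpass1(f, 17e3)                   # the high-frequency filter
gain_db = 20 * np.log10(np.abs(low_freq))
# On the chip, retuning the bias currents corresponds to moving f0/fc here.
```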
Figure 4: The Gain Cell above is used to convert the differential voltage
input from the delay cells into a single-ended current output. The gain of
each cell is controllable via a programmable current cell (p_splitter).
An on-chip bias generator [7] was used to create all the necessary current biases on
the chip. All the main blocks (delays, gain cells and filters), however, can have their
on-chip bias currents overridden through external pins on the chip.
The chip was fabricated using the MOSIS AMI 1.6 µm technology and designed
using the Cadence Custom IC Design Tools (5.0.33).
3 Methods
The chip was tested using sound generated on a computer and played through a
soundcard to the chip. Responses from the chip were recorded by an oscilloscope,
and uploaded back to the computer on completion. Given that the output from the
chip and the gain circuits is a current, an external current-sense circuit built with
discrete components was used to enable the output to be probed by the oscilloscope.
Figure 5: The circuit diagrams for the log-domain filter building blocks ?
The Tau Cell and The Multiplier ? along with the block diagrams for the
three filters used in the aVLSI model.
Initial experiments were performed to tune the delays and gains. After that,
recordings were taken of the directional frequency responses. Sounds were
generated by computer for each chip input to simulate moving the forelegs by
delaying the sound by the appropriate amount of time; this was a much simpler
solution than using microphones and moving them using motors.
4 Results
The aVLSI chip was tested to measure its gains and delays, which were successfully
tuned to the appropriate values. The chip was then compared with the simulation to
check that it was faithfully modelling the system. A result of this test at 4kHz
(approximately the cricket calling-song frequency) is shown in Figure 6. Apart from
a drop in amplitude of the signal, the response of the circuit was very similar to that
of the simulator. The differences were expected because the aVLSI circuit has to
deal with real-world noise, whereas the simulated version has perfect signals.
Examples of the gain versus frequency response of the two log-domain band-pass
filters are shown in Figure 7. Note that the narrow-band filter peaks at 6kHz, which
is significantly above the mating song frequency of the cricket which is around
4.5kHz. This is not a mistake, but is observed in real crickets as well. As stated in
the introduction, a range of further testing results with both the circuit and the
simulator are being published in [6].
5 Discussion
The aVLSI auditory sensor in this research models the hearing of the field cricket
Gryllus bimaculatus. It is a more faithful model of the cricket auditory system than
was previously built in [4], reproducing all the acoustic inputs, as well as the
responses to frequencies of both the co specific calling song and bat echolocation
chirps. It also generates outputs corresponding to the two sets of behaviourally
relevant auditory receptor fibres. Results showed that it matched the biological data
well, though there were some inconsistencies due to an error in the specification that
will be addressed in a future iteration of the design. A more complete
implementation across all frequencies was impractical because of complexity and
size issues as well as serving no clear behavioural purpose.
Figure 6: Vibration amplitude of the left (dotted) and right (solid) virtual
tympana measured in decibels in response to a 4kHz tone in simulation
(left) and on the aVLSI chip (right). The plot shows the amplitude of the
tympanal responses as the sound source is rotated around the cricket.
Figure 7: Frequency-Gain curves for the narrow-band and wide-band bandpass filters.
The long-term aim of this work is to better understand simple sensorimotor control
loops in crickets and other insects. The next step is to mount this circuitry on a robot
to carry out behavioural experiments, which we will compare with existing and new
behavioural data (such as that in [10]). This will allow us to refine our models of the
neural circuitry involved. Modelling the sensory afferent neurons in hardware is
necessary in order to reduce processor load on our robot, so the next revision will
include these either onboard, or on a companion chip as we have done before [11].
We will also move both sides of the auditory system onto a single chip to conserve
space on the robot.
It is our belief and experience that, as a result of this intelligent pre-processing
carried out at the sensor level, the neural circuits necessary to accurately model the
behaviour will remain simple.
Acknowledgments
The authors thank the Institute of Neuromorphic Engineering and the UK
Biotechnology and Biological Sciences Research Council for funding the research in
this paper.
References
[1] R. Wehner. Matched filters - neural models of the external world. J Comp Physiol A, 161:511-531, 1987.
[2] A. Michelsen, A. V. Popov, and B. Lewis. Physics of directional hearing in the cricket Gryllus bimaculatus. Journal of Comparative Physiology A, 175:153-164, 1994.
[3] A. Michelsen. The tuned cricket. News Physiol. Sci., 13:32-38, 1998.
[4] H. H. Lund, B. Webb, and J. Hallam. A robot attracted to the cricket species Gryllus bimaculatus. In P. Husbands and I. Harvey, editors, Proceedings of 4th European Conference on Artificial Life, pages 246-255. MIT Press/Bradford Books, MA, 1997.
[5] R. Reeve and B. Webb. New neural circuits for robot phonotaxis. Phil. Trans. R. Soc. Lond. A, 361:2245-2266, August 2003.
[6] R. Reeve, A. van Schaik, C. Jin, T. Hamilton, B. Torben-Nielsen, and B. Webb. Directional hearing in a silicon cricket. Biosystems, (in revision), 2005.
[7] T. Delbrück and A. van Schaik. Bias Current Generators with Wide Dynamic Range. Analog Integrated Circuits and Signal Processing, 42(2), 2005.
[8] A. van Schaik and C. Jin. The Tau Cell: A New Method for the Implementation of Arbitrary Differential Equations. IEEE International Symposium on Circuits and Systems (ISCAS), 2003.
[9] Kazuo Imaizumi and Gerald S. Pollack. Neural coding of sound frequency by cricket auditory receptors. The Journal of Neuroscience, 19(4):1508-1516, 1999.
[10] Berthold Hedwig and James F. A. Poulet. Complex auditory behaviour emerges from simple reactive steering. Nature, 430:781-785, 2004.
[11] R. Reeve, B. Webb, A. Horchler, G. Indiveri, and R. Quinn. New technologies for testing a model of cricket phonotaxis on an outdoor robot platform. Robotics and Autonomous Systems, 51(1):41-54, 2005.
Learning Rankings via Convex Hull Separation
Glenn Fung, Rómer Rosales, Balaji Krishnapuram
Computer Aided Diagnosis, Siemens Medical Solutions USA, Malvern, PA 19355
{glenn.fung, romer.rosales, balaji.krishnapuram}@siemens.com
Abstract
We propose efficient algorithms for learning ranking functions from order constraints between sets (i.e., classes) of training samples. Our algorithms may be used for maximizing the generalized Wilcoxon-Mann-Whitney statistic that accounts for the partial ordering of the classes: special cases include maximizing the area under the ROC curve for binary
classification and its generalization for ordinal regression. Experiments
on public benchmarks indicate that: (a) the proposed algorithm is at least
as accurate as the current state-of-the-art; (b) computationally, it is several orders of magnitude faster and, unlike current methods, it is easily
able to handle even large datasets with over 20,000 samples.
1 Introduction
Many machine learning applications depend on accurately ordering the elements of a set
based on the known ordering of only some of its elements. In the literature, variants of this
problem have been referred to as ordinal regression, ranking, and learning of preference
relations. Formally, we want to find a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ such that, for a set of test
samples $\{x_k \in \mathbb{R}^n\}$, the output of the function $f(x_k)$ can be sorted to obtain a ranking. In
order to learn such a function we are provided with training data, $A$, containing $S$ sets (or
classes) of training samples: $A = \bigcup_{j=1}^{S} A_j$, where the $j$-th set $A_j = \{x_i^j\}_{i=1}^{m_j}$ contains
$m_j$ samples, so that we have a total of $m = \sum_{j=1}^{S} m_j$ samples in $A$. Further, we are also
provided with a directed order graph $G = (S, E)$, each of whose vertices corresponds to a
class $A_j$, and the existence of a directed edge $E_{PQ}$, corresponding to $A_P \succ A_Q$, means
that all training samples $x_p \in A_P$ should be ranked higher than any sample $x_q \in A_Q$: i.e.,
$\forall (x_p \in A_P, x_q \in A_Q),\; f(x_p) \ge f(x_q)$.
In general the number of constraints on the ranking function grows as $O(m^2)$, so that naive
solutions are computationally infeasible even for moderate sized training sets with a few
thousand samples. Hence, we propose a more stringent problem with a larger (infinite) set
of constraints, that is nevertheless much more tractably solved. In particular, we modify
the constraints to: $\forall (x_p \in CH(A_P), x_q \in CH(A_Q)),\; f(x_p) \ge f(x_q)$, where $CH(A_j)$
denotes the set of all points in the convex hull of $A_j$.
We show how this leads to: (a) a family of approximations to the original problem; and (b)
considerably more efficient solutions that still enforce all of the original inter-group order
constraints. Notice that, this formulation subsumes the standard ranking problem (e.g. [4])
as a special case when each set Aj is reduced to a singleton and the order graph is equal to
Figure 1: Various instances of the proposed ranking problem consistent with the training set
{v, w, x, y, z} satisfying v > w > x > y > z. Each problem instance is defined by an order
graph. (a-d) A succession of order graphs with an increasing number of constraints (e-f) Two order
graphs defining the same partial ordering but different problem instances.
a full graph. However, as illustrated in Figure 1, the formulation is more general and does
not require a total ordering of the sets of training samples Aj , i.e. it allows any order graph
G to be incorporated into the problem.
1.1 Generalized Wilcoxon-Mann-Whitney Statistics
A distinction is usually made between classification and ordinal regression methods on
one hand, and ranking on the other. In particular, the loss functions used for classification
and ordinal regression evaluate whether each test sample is correctly classified: in other
words, the loss functions that are used to evaluate these algorithms (e.g., the 0-1 loss for
binary classification) are computed for every sample individually, and then averaged over
the training or test set.
By contrast, bipartite ranking solutions are evaluated using the Wilcoxon-Mann-Whitney
(WMW) statistic which measures the (sample averaged) probability that any pair of samples is ordered correctly; intuitively, the WMW statistic may be interpreted as the area
under the ROC curve (AUC). We define a slight generalization of the WMW statistic that
accounts for our notion of class-ordering:
$$\mathrm{WMW}(f, A) = \sum_{E_{ij}} \frac{\sum_{k=1}^{m_i} \sum_{l=1}^{m_j} \delta\big(f(x_l^j) < f(x_k^i)\big)}{\sum_{k=1}^{m_i} \sum_{l=1}^{m_j} 1}.$$
Hence, if a sample is individually misclassified because it falls on the wrong side of the
decision boundary between classes it incurs a penalty in ordinal regression, whereas, in
ranking, it may be possible that it is still correctly ordered with respect to every other test
sample, and thus it may incur no penalty in the WMW statistic.
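For concreteness, the statistic can be computed directly from scored samples and the order graph, as in the sketch below; the function and variable names are ours, and ties are ignored:

```python
import numpy as np

def generalized_wmw(scores, edges):
    """Fraction of inter-class pairs ordered correctly.

    scores: dict mapping class id -> 1-D array of f(x) values.
    edges:  iterable of (i, j) pairs meaning class i outranks class j.
    """
    correct, total = 0, 0
    for i, j in edges:
        hi, lo = np.asarray(scores[i]), np.asarray(scores[j])
        # count pairs where the higher-ranked class actually scores higher
        correct += np.sum(hi[:, None] > lo[None, :])
        total += hi.size * lo.size
    return correct / total

# Toy example: three classes with A1 > A2 > A3 on a chain graph.
scores = {1: np.array([2.0, 1.9]), 2: np.array([1.5, 0.9]),
          3: np.array([0.5, 1.0])}
print(generalized_wmw(scores, edges=[(1, 2), (2, 3)]))   # 7/8 = 0.875
```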
1.2 Previous Work
Ordinal regression and methods for handling structured output classes: For a classic
description of generalized linear models for ordinal regression, see [11]. A non-parametric
Bayesian model for ordinal regression based on Gaussian processes (GP) was presented in
[1]. Several recent machine learning papers consider structured output classes: e.g., [13]
presents SVM based algorithms for handling structured and interdependent output spaces,
and [5] discusses automatic document categorization into pre-defined hierarchies or taxonomies of topics.
Learning Rankings: The problem of learning rankings was first treated as a classification
problem on pairs of objects by Herbrich [4] and subsequently used on a web page ranking
task by Joachims [6]; a variety of authors have investigated this approach recently. The
major advantage of this approach is that it considers a more explicit notion of ordering.
However, the naive optimization strategy proposed there suffers from the $O(m^2)$ growth
in the number of constraints mentioned in the previous section. This computational burden renders these methods impractical even for medium sized datasets with a few thousand
samples. In other related work, boosting methods have been proposed for learning preferences [3], and a combinatorial structure called the ranking poset was used for conditional
modeling of partially ranked data [8], in the context of combining ranked sets of web pages
produced by various web-page search engines. Another, less related, approach is [2].
Relationship to the proposed work: Our algorithm penalizes wrong ordering of pairs of
training instances in order to learn ranking functions (similar to [4]), but in addition, it can
also utilize the notion of a structured class order graph. Nevertheless, using a formulation based on constraints over convex hulls of the training classes, our method avoids the
prohibitive computational complexity of the previous algorithms for ranking.
1.3 Notation and Background
In the following, vectors will be assumed to be column vectors unless transposed to a row
vector by a prime superscript $'$. For a vector $x$ in the $n$-dimensional real space $\mathbb{R}^n$, the
cardinality of a set $A$ will be denoted by $\#(A)$. The scalar (inner) product of two vectors $x$
and $y$ in $\mathbb{R}^n$ will be denoted by $x'y$ and the 2-norm of $x$ will
be denoted by $\|x\|$. For a matrix $A \in \mathbb{R}^{m \times n}$, $A_i$ is the $i$th row of $A$, which is a row vector in
$\mathbb{R}^n$, while $A_{\cdot j}$ is the $j$th column of $A$. A column vector of ones of arbitrary dimension will
be denoted by $e$. For $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times k}$, the kernel $K(A, B)$ maps $\mathbb{R}^{m \times n} \times \mathbb{R}^{n \times k}$
into $\mathbb{R}^{m \times k}$. In particular, if $x$ and $y$ are column vectors in $\mathbb{R}^n$ then $K(x', y)$ is a real
number, $K(x', A')$ is a row vector in $\mathbb{R}^m$, and $K(A, A')$ is an $m \times m$ matrix. The identity
matrix of arbitrary dimension will be denoted by $I$.
2 Convex Hull formulation
We are interested in learning a ranking function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ given known ranking relationships between some training instances $A_i, A_j \subset A$. Let the ranking relationships be
specified by a set $E = \{(i, j) \,|\, A_i \succ A_j\}$.
To begin with, let us consider the linearly separable binary ranking case, which is equivalent
to the problem of classifying $m$ points in the $n$-dimensional real space $\mathbb{R}^n$, represented by
the $m \times n$ matrix $A$, according to membership of each point $x = A_i$ in the class $A^+$ or $A^-$,
as specified by a given vector of labels $d$. In other words, for binary classifiers, we want a
linear ranking function $f_w(x) = w'x$ that satisfies the following constraints:
$$\forall (x^+ \in A^+,\, x^- \in A^-),\quad f(x^-) \le f(x^+) \;\Leftarrow\; f(x^-) - f(x^+) = w'x^- - w'x^+ \le -1 \le 0. \tag{1}$$
Clearly, the number of constraints grows as $O(m^+ m^-)$, which is roughly quadratic in
the number of training samples (unless we have severe class imbalance). While easily
overcome?based on additional insights?in the separable problem, in the non-separable
case, the quadratic growth in the number of constraints poses huge computational burdens
on the optimization algorithm; indeed direct optimization with these constraints is infeasible even for moderate sized problems. We overcome this computational problem based on
three key insights that are explained below.
First, notice that (by negation) the feasibility constraints in (1) can also be defined as:
$$\forall (x^+ \in A^+, x^- \in A^-),\; w'x^- - w'x^+ \le -1 \;\Leftrightarrow\; \nexists (x^+ \in A^+, x^- \in A^-),\; w'x^- - w'x^+ > -1.$$
In other words, a solution $w$ is feasible iff there exists no pair of samples from the two
classes such that $f_w(\cdot)$ orders them incorrectly.
Second, we will make the constraints in (1) more stringent: instead of requiring that equation (1) be satisfied for each possible pair $(x^+ \in A^+, x^- \in A^-)$ in the training set, we will
Figure 2: Example binary problem where points belonging to the $A^+$ and $A^-$ sets are represented
by blue circles and red triangles respectively. Note that two elements $x_i$ and $x_j$ of the set $A^-$
are not correctly ordered and hence generate positive values of the corresponding slack variables
$y_i$ and $y_j$. Note that the point $x_k$ (hollow triangle) is in the convex hull of the set $A^-$ and hence
the corresponding $y_k$ error can be written as a convex combination ($y_k = \lambda_{ki} y_i + \lambda_{kj} y_j$) of the two
nonzero errors corresponding to points of $A^-$.
require (1) to be satisfied for each pair $(x^+ \in CH(A^+), x^- \in CH(A^-))$, where $CH(A_i)$
denotes the convex hull of the set $A_i$ [12]. Thus, our constraints become:
$$\forall (\lambda^+, \lambda^-) \text{ such that } \left\{ \begin{array}{l} 0 \le \lambda^+ \le 1,\; \sum \lambda^+ = 1 \\ 0 \le \lambda^- \le 1,\; \sum \lambda^- = 1 \end{array} \right.,\quad w'A^{-\prime}\lambda^- - w'A^{+\prime}\lambda^+ \le -1. \tag{2}$$
Next, notice that all the linear inequality and equality constraints on $(\lambda^+, \lambda^-)$ may be
conveniently grouped together as $B\lambda \le b$, where,
$$\lambda = \begin{bmatrix} \lambda^- \\ \lambda^+ \end{bmatrix}_{m \times 1}, \quad b = \begin{bmatrix} b^- \\ b^+ \end{bmatrix}, \quad b^- = \begin{bmatrix} 0_{m^- \times 1} \\ 1 \\ -1 \end{bmatrix}_{(m^- + 2) \times 1}, \quad b^+ = \begin{bmatrix} 0_{m^+ \times 1} \\ 1 \\ -1 \end{bmatrix}_{(m^+ + 2) \times 1} \tag{3}$$
$$B = \begin{bmatrix} B^- \\ B^+ \end{bmatrix}_{(m+4) \times m}, \quad B^- = \begin{bmatrix} -I_{m^-} & 0 \\ e' & 0 \\ -e' & 0 \end{bmatrix}_{(m^- + 2) \times m}, \quad B^+ = \begin{bmatrix} 0 & -I_{m^+} \\ 0 & e' \\ 0 & -e' \end{bmatrix}_{(m^+ + 2) \times m} \tag{4}$$
Thus, our constraints on $w$ can be written as:
$$\forall \lambda \text{ s.t. } B\lambda \le b:\quad w'A^{-\prime}\lambda^- - w'A^{+\prime}\lambda^+ \le -1 \tag{5}$$
$$\Leftrightarrow\quad \nexists \lambda \text{ s.t. } B\lambda \le b,\; w'A^{-\prime}\lambda^- - w'A^{+\prime}\lambda^+ > -1 \tag{6}$$
$$\Leftrightarrow\quad \exists u \text{ s.t. } B'u - \begin{bmatrix} A^- \\ -A^+ \end{bmatrix} w = 0,\; b'u \le -1,\; u \ge 0, \tag{7}$$
where the second equivalent form of the constraints was obtained by negation (as before),
and the third equivalent form results from our third key insight: the application of Farkas'
theorem of alternatives [9]. The resulting linear system of $m$ equalities and $m+5$ inequalities in $m+n+4$ variables can be used while minimizing any regularizer (such as $\|w\|^2$)
to obtain the linear ranking function that satisfies (1); notice, however, that we avoid the
$O(m^2)$ scaling in constraints.
2.1 The binary non-separable case
In the non-separable case, $CH(A^+) \cap CH(A^-) \ne \emptyset$, so the requirements have to be relaxed by introducing slack variables. To this end, we allow one slack variable $y_i \ge 0$
for each training sample $x_i$, and consider the slack for any point inside the convex hull
$CH(A_j)$ to also be a convex combination of $y$ (see Fig. 2). For example, this implies that
if only a subset of training samples have non-zero slacks $y_i > 0$ (i.e., they are possibly misclassified), then the slacks of any points inside the convex hull also only depend on those
$y_i$. Thus, our constraints now become:
$$\forall \lambda \text{ s.t. } B\lambda \le b:\quad w'A^{-\prime}\lambda^- - w'A^{+\prime}\lambda^+ \le -1 + (\lambda^{-\prime} y^- + \lambda^{+\prime} y^+),\qquad y^+ \ge 0,\; y^- \ge 0. \tag{8}$$
Applying Farkas' theorem of alternatives, we get:
$$(2) \;\Leftrightarrow\; \exists u \text{ s.t. } B'u - \begin{bmatrix} A^- w \\ -A^+ w \end{bmatrix} + \begin{bmatrix} y^- \\ y^+ \end{bmatrix} = 0,\quad b'u \le -1,\; u \ge 0 \tag{9}$$
Replacing $B$ from equation (4) and defining $u' = [u^{-\prime}\; u^{+\prime}] \ge 0$, we get the constraints:
$$B^{+\prime} u^+ + A^+ w + y^+ = 0, \tag{10}$$
$$B^{-\prime} u^- - A^- w + y^- = 0, \tag{11}$$
$$b^{+\prime} u^+ + b^{-\prime} u^- \le -1,\quad u \ge 0 \tag{12}$$
2.2 The general ranking problem
Now we can extend the idea presented in the previous section to any arbitrary directed order graph $G = (S, E)$. As stated in the introduction, each of its vertices corresponds to a
class $A_j$, and the existence of a directed edge $E_{ij}$ means that all training samples
$x_i \in A_i$ should be ranked higher than any sample $x_j \in A_j$, that is:
$$f(x_j) \le f(x_i) \;\Leftarrow\; f(x_j) - f(x_i) = w'x_j - w'x_i \le -1 \le 0 \tag{13}$$
sets Ai and Aj :
?
B i uij + Ai w + y i = 0
(14)
?
Bj u
?ij ? Aj w + y j
i ij
b u + bj u
?ij ? ?1
uij , u
?ij ? 0
=
0
(15)
(16)
(17)
It can be shown that, using the definitions of $B^i, B^j, b^i, b^j$ and the fact that $u^{ij}, \tilde{u}^{ij} \ge 0$,
equations (14) can be rewritten in the following way:
$$\gamma^{ij} + A^i w + y^i \ge 0 \tag{18}$$
$$\tilde{\gamma}^{ij} - A^j w + y^j \ge 0 \tag{19}$$
$$\gamma^{ij} + \tilde{\gamma}^{ij} \le -1 \tag{20}$$
$$y^i,\, y^j \ge 0 \tag{21}$$
where $\gamma^{ij} = b^{i\prime} u^{ij}$ and $\tilde{\gamma}^{ij} = b^{j\prime} \tilde{u}^{ij}$. Note that enforcing the constraints defined above
indeed implies the desired ordering, since we have:
$$A^i w + y^i \ge -\gamma^{ij} \ge \tilde{\gamma}^{ij} + 1 \ge \tilde{\gamma}^{ij} \ge A^j w - y^j$$
It is also important to note the connection with the Support Vector Machine (SVM) formulation [10, 14] for the binary case. If we impose the extra constraints $-\gamma^{ij} = \gamma + 1$ and
$\tilde{\gamma}^{ij} = \gamma - 1$, then equations (18) imply the constraints included in the standard primal SVM
formulation. To obtain a more general formulation, we can "kernelize" equations (14) by
making a transformation of the variable $w$ as $w = A'v$, where $v$ can be interpreted as an
arbitrary variable in $\mathbb{R}^m$. This transformation can be motivated by duality theory [10]; then
equations (14) become:
$$\gamma^{ij} + A^i A' v + y^i \ge 0 \tag{22}$$
$$\tilde{\gamma}^{ij} - A^j A' v + y^j \ge 0 \tag{23}$$
$$\gamma^{ij} + \tilde{\gamma}^{ij} \le -1 \tag{24}$$
$$y^i,\, y^j \ge 0 \tag{25}$$
If we now replace the linear kernels $A^i A'$ and $A^j A'$ by more general kernels $K(A^i, A')$
and $K(A^j, A')$, we obtain a "kernelized" version of equations (14):
$$E_{ij} \equiv \left\{ \begin{array}{l} \gamma^{ij} + K(A^i, A')v + y^i \ge 0 \\ \tilde{\gamma}^{ij} - K(A^j, A')v + y^j \ge 0 \\ \gamma^{ij} + \tilde{\gamma}^{ij} \le -1 \\ y^i,\, y^j \ge 0 \end{array} \right. \tag{26}$$
Given a graph $G = (V, E)$ representing the ordering of the training data and using equations (26), we next present a general mathematical programming formulation of the ranking
problem:
$$\min_{\{v,\, y^i,\, \gamma^{ij} \,|\, (i,j) \in E\}} \;\nu\, \Phi(y) + R(v) \quad \text{s.t.} \quad E_{ij} \;\; \forall (i,j) \in E \tag{27}$$
where $\Phi$ is a given loss function for the slack variables $y^i$ and $R(v)$ represents a regularizer
on the normal to the hyperplane $v$. For an arbitrary kernel $K(x, x')$ the number of variables
of formulation (27) is $2m + 2\#(E)$ and the number of linear equations (excluding the
nonnegativity constraints) is $m\#(E) + \#(E) = \#(E)(m + 1)$. For a linear kernel, i.e.,
$K(x, x') = xx'$, the number of variables of formulation (27) becomes $m + n + 2\#(E)$
and the number of linear equations remains the same. When using a linear kernel and
using $\Phi(x) = R(x) = \|x\|_2^2$, the optimization problem (27) becomes a linearly constrained
quadratic optimization problem for which a unique solution exists due to the convexity of
the objective function:
$$\min_{\{w,\, y^i,\, \gamma^{ij} \,|\, (i,j) \in E\}} \;\nu \|y\|_2^2 + \tfrac{1}{2} w'w \quad \text{s.t.} \quad E_{ij} \;\; \forall (i,j) \in E \tag{28}$$
Unlike other SVM-like methods for ranking that need $O(m^2)$ slack variables
$y$, our formulation only requires one slack variable per training sample; i.e., only $m$ slack variables
are used, giving our formulation a computational advantage over those ranking methods. Next,
we demonstrate the effectiveness of our algorithm by comparing it to two state-of-the-art
algorithms.
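As a minimal sketch of how formulation (28) can be assembled for a linear kernel, the code below uses the generic convex solver cvxpy; this is our choice of tool, not the one used by the authors, and the variable names follow equations (18)-(21):

```python
import numpy as np
import cvxpy as cp

def rank_train(classes, edges, nu=1.0):
    """Solve formulation (28): min nu*||y||^2 + 0.5*w'w subject to E_ij.

    classes: dict class id -> (m_i x n) array A_i.
    edges:   (i, j) pairs meaning A_i should outrank A_j.
    """
    n = next(iter(classes.values())).shape[1]
    w = cp.Variable(n)
    # one nonnegative slack per training sample, shared across all edges
    y = {k: cp.Variable(A.shape[0], nonneg=True) for k, A in classes.items()}
    cons = []
    for i, j in edges:
        g_ij = cp.Variable()         # gamma^{ij}
        gt_ij = cp.Variable()        # gamma-tilde^{ij}
        cons += [g_ij + classes[i] @ w + y[i] >= 0,     # (18)
                 gt_ij - classes[j] @ w + y[j] >= 0,    # (19)
                 g_ij + gt_ij <= -1]                    # (20)
    loss = nu * sum(cp.sum_squares(y[k]) for k in classes)
    prob = cp.Problem(cp.Minimize(loss + 0.5 * cp.sum_squares(w)), cons)
    prob.solve()
    return w.value

# Toy chain A1 > A2: note there are m slack variables, not one per pair.
A = {1: np.array([[2.0, 1.0], [1.5, 1.2]]), 2: np.array([[0.2, 0.1]])}
w = rank_train(A, edges=[(1, 2)])
print(A[1] @ w, A[2] @ w)   # class-1 scores should exceed class-2 scores
```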
3 Experimental Evaluation
We tested our approach on a set of nine publicly available datasets¹ shown in Tab. 1
(several large datasets are not reported since only the algorithm presented in this paper was
able to run them). These datasets have been frequently used as a benchmark for ordinal
regression methods (e.g. [1]). Here we use them for evaluating ranking performance. We
compare our method against SVM for ranking (e.g., [4, 6]) using the SVM-light package²
and an efficient Gaussian process method (the informative vector machine)³ [7].
These datasets were originally designed for regression, thus the continuous target values
for each dataset were discretized into five equal size bins. We use these bins to define
our ranking constraints: all the datapoints with target value falling in the same bin were
grouped together. Each dataset was divided into 10% for testing and 90% for training.
Thus, the input to all of the algorithms tested was, for each point in the training set: (1) a
vector in $\mathbb{R}^n$ (where $n$ is different for each set) and (2) a value from 1 to 5 denoting the
rank of the group to which it belongs.
Performance is defined in terms of the Wilcoxon statistic. Since we do not employ information about the ranking of the elements within each group, order constraints within a group
1 Available at http://www.liacc.up.pt/~ltorgo/Regression/DataSets.html
2 http://www.cs.cornell.edu/People/tj/svm_light/
3 http://www.dcs.shef.ac.uk/~neil/ivm/
Table 1: Benchmark Datasets

  Name                  m      n        Name                  m      n
1 Abalone             4177     9      6 Machine-CPU          209     7
2 Airplane Comp.       950    10      7 Pyrimidines           74    28
3 Auto-MPG             392     8      8 Triazines            186    61
4 CA Housing         20640     9      9 WI Breast Cancer     194    33
5 Housing-Boston       506    14
[Figure 3 panels: generalized Wilcoxon statistic (AUC) per dataset number (left) and run time on a log scale per dataset number (right), with curves for SVM-light, IVM, Proposed (full graph), and Proposed (chain graph).]
Figure 3: Experimental comparison of the ranking SVM, IVM and the proposed method on nine
benchmark datasets. Along with the mean values in 10 fold cross-validation, the entire range of variation is indicated in the error-bars. (a) The overall accuracy for all the three methods is comparable.
(b) The proposed method has a much lower run time than the other methods, even for the full graph
case for medium to large size datasets. NOTE: Both SVM-light and IVM ran out of memory and
crashed on dataset 4; on dataset 1, SVM-light failed to complete even one fold after more than 24
hours of run time, so its results could not be compiled in time for submission.
cannot be verified. Letting $b(m) = m(m-1)/2$, the total number of order constraints is
equal to $b(m) - \sum_i b(m_i)$, where $m_i$ is the number of instances in group $i$.
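The following snippet makes this count concrete (the 5-bin split sizes are illustrative):

```python
def n_order_constraints(group_sizes):
    """b(m) - sum_i b(m_i), with b(m) = m*(m-1)/2: the number of
    inter-group pairs the Wilcoxon statistic is evaluated over."""
    b = lambda m: m * (m - 1) // 2
    return b(sum(group_sizes)) - sum(b(mi) for mi in group_sizes)

# e.g. the Abalone set: 4177 samples in 5 roughly equal bins
print(n_order_constraints([836, 835, 835, 835, 836]))   # ~7.0 million pairs
```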
The results for all of the algorithms are shown in Fig.3. Our formulation was tested employing two order graphs, the full directed acyclic graph and the chain graph. The performance
for all datasets is generally comparable or significantly better for our algorithm (when using a chain order graph). Note that the performance for the full graph is consistently lower
than that for the chain graph. Thus, interestingly enforcing more order constraints does not
necessarily imply better performance. We suspect that this is due to the role that the slack
variables play in both formulations, since the number of slack variables remains the same
while the number of constraints increases. Adding more slack variables may positively
affect performance in the full graph, but this comes at a computational cost. An interesting
problem is to find the right compromise. A different but potentially related problem is that
of finding good order graph given a dataset. Note also that the chain graph is much more
stable regarding performance overall. Regarding run-time, our algorithm runs an order of
magnitude faster than current implementations of state-of-the-art methods, even approximate ones (like IVM).
4 Discussions and future work
We propose a general method for learning a ranking function from structured order constraints on sets of training samples. The proposed algorithm was illustrated on benchmark
ranking problems with two different constraint graphs: (a) a chain graph; and (b) a full
ordering graph. Although a chain graph was more accurate in the experiments shown in
Figure 3, with either type of graph structure, the proposed method is at least as accurate (in
terms of the WMW statistic for ordinal regression) as state-of-the-art algorithms such as
the ranking-SVM and Gaussian Processes for ordinal regression.
Besides being accurate, the computational requirements of our algorithm scale much more
favorably with the number of training samples as compared to other state-of-the-art methods. Indeed it was the only algorithm capable of handling several large datasets, while the
other methods either crashed due to lack of memory or ran for so long that they were not
practically feasible. While our experiments illustrate only specific order graphs, we stress
that the method is general enough to handle arbitrary constraint relationships.
While the proposed formulation reduces the computational complexity of enforcing order constraints, it is entirely independent of the regularizer that is minimized (under these
constraints) while learning the optimal ranking function. Though we have used a simple
margin regularization (via $\|w\|^2$ in (28)) and RKHS regularization (in (27)) in order to learn
in a supervised setting, we can just as easily use a graph-Laplacian based regularizer that exploits unlabeled data, in order to learn in semi-supervised settings. We plan to
explore this in future work.
References
[1] W. Chu and Z. Ghahramani, Gaussian processes for ordinal regression, Tech. report, University College London, 2004.
[2] K. Crammer and Y. Singer, Pranking with ranking, Neural Info. Proc. Systems, 2002.
[3] Y. Freund, R. Iyer, and R. Schapire, An efficient boosting algorithm for combining preferences, Journal of Machine Learning Research 4 (2003), 933-969.
[4] R. Herbrich, T. Graepel, and K. Obermayer, Large margin rank boundaries for ordinal regression, Advances in Large Margin Classifiers (2000), 115-132.
[5] T. Hofmann, L. Cai, and M. Ciaramita, Learning with taxonomies: Classifying documents and words, (NIPS) Workshop on Syntax, Semantics, and Statistics, 2003.
[6] T. Joachims, Optimizing search engines using clickthrough data, Proc. ACM Conference on Knowledge Discovery and Data Mining (KDD), 2002.
[7] N. Lawrence, M. Seeger, and R. Herbrich, Fast sparse Gaussian process methods: The informative vector machine, Neural Info. Proc. Systems, 2002.
[8] G. Lebanon and J. Lafferty, Conditional models on the ranking poset, Neural Info. Proc. Systems, 2002.
[9] O. L. Mangasarian, Nonlinear programming, McGraw-Hill, New York, 1969. Reprint: SIAM Classics in Applied Mathematics 10, 1994, Philadelphia.
[10] O. L. Mangasarian, Generalized support vector machines, Advances in Large Margin Classifiers, 2000, pp. 135-146.
[11] P. McCullagh and J. Nelder, Generalized linear models, Chapman & Hall, 1983.
[12] R. T. Rockafellar, Convex analysis, Princeton University Press, Princeton, New Jersey, 1970.
[13] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun, Support vector machine learning for interdependent and structured output spaces, Int. Conf. on Machine Learning, 2004.
[14] V. N. Vapnik, The nature of statistical learning theory, second ed., Springer, New York, 2000.
The Role of Top-down and Bottom-up Processes
in Guiding Eye Movements during Visual Search
Gregory J. Zelinsky*†, Wei Zhang†, Bing Yu†, Xin Chen*†, Dimitris Samaras†
Dept. of Psychology*, Dept. of Computer Science†
State University of New York at Stony Brook
Stony Brook, NY 11794
[email protected]? , [email protected]?
{wzhang,ybing,samaras}@cs.sunysb.edu?
Abstract
To investigate how top-down (TD) and bottom-up (BU) information is
weighted in the guidance of human search behavior, we manipulated the
proportions of BU and TD components in a saliency-based model. The
model is biologically plausible and implements an artificial retina and
a neuronal population code. The BU component is based on feature-contrast. The TD component is defined by a feature-template match to a
stored target representation. We compared the model?s behavior at different mixtures of TD and BU components to the eye movement behavior
of human observers performing the identical search task. We found that a
purely TD model provides a much closer match to human behavior than
any mixture model using BU information. Only when biological constraints are removed (e.g., eliminating the retina) did a BU/TD mixture
model begin to approximate human behavior.
1. Introduction
The human object detection literature, also known as visual search, has long struggled with
how best to conceptualize the role of bottom-up (BU) and top-down (TD) processes in guiding search behavior.1 Early theories of search assumed a pure BU feature decomposition of
the objects in an image, followed by the later reconstitution of these features into objects if
the object?s location was visited by spatially directed visual attention [1]. Importantly, the
direction of attention to feature locations was believed to be random in these early models,
thereby making them devoid of any BU or TD component contributing to the guidance of
attention to objects in scenes.
The belief in a random direction of attention during search was quashed by Wolfe and
colleagues' [2] demonstration of TD information affecting search guidance. According to
their guided-search model [3], preattentively available features from objects not yet bound
by attention can be compared to a high-level target description to generate signals indicating evidence for the target in a display. The search process can then use these signals to
1 In this paper we will refer to BU guidance as guidance based on task-independent signals arising
from basic neuronal feature analysis. TD guidance will refer to guidance based on information not
existing in the input image or proximal search stimulus, such as knowledge of target features or
processing constraints imposed by task instruction.
guide attention to display locations indicating the greatest evidence for the target. More
recent models of TD target guidance can accept images of real-world scenes as stimuli
and generate sequences of eye movements that can be directly compared to human search
behavior [4].
Purely BU models of attention guidance have also enjoyed a great deal of recent research interest. Building on the concept of a saliency map introduced in [5], these models attempt to
use biologically plausible computational primitives (e.g., center-surround receptive fields,
color opponency, winner-take-all spatial competition, etc.) to define points of high salience
in an image that might serve as attractors of attention. Much of this work has been discussed in the context of scene perception [6], but recently Itti and Koch [7] extended a
purely BU model to the task of visual search. They defined image saliency in terms of
intensity, color, and orientation contrast for multiple spatial scales within a pyramid. They
found that a saliency model based on feature-contrast was able to account for a key finding
in the behavioral search literature, namely very efficient search for feature-defined targets
and far less efficient search for targets defined by conjunctions of features [1].
Given the body of evidence suggesting both TD and BU contributions to the guidance of
attention in a search task, the logical next question to ask is whether these two sources of
information should be combined to describe search behavior and, if so, in what proportion?
To answer this question, we adopt a three-pronged approach. First, we implement two models of eye movements during visual search, one a TD model derived from the framework
proposed by [4] and the other a BU model based on the framework proposed by [7]. Second, we use an eyetracker to collect behavioral data from human observers so as to quantify
guidance in terms of the number of fixations needed to acquire a target. Third, we combine
the outputs of the two models in various proportions to determine the TD/BU weighting
best able to describe the number of search fixations generated by the human observers.
2. Eye movement model
Figure 1: Flow of processing through the model. Abbreviations: TD SM (top-down
saliency map); BU SM (bottom-up saliency map); SF(suggested fixation point); TSM
(thresholded saliency map); CF2HS (Euclidean distance between current fixation and
hotspot); SF2CF(Euclidean distance between suggested fixation and current fixation);
EMT (eye movement threshold); FT (foveal threshold).
In this section we introduce a computational model of eye movements during visual search.
The basic flow of processing in this model is shown in Figure 1. Generally, we repre-
sent search scenes in terms of simple and biologically-plausible visual feature-detector
responses (colors, orientations, scales). Visual routines then act on these representations to
produce a sequence of simulated eye movements. Our framework builds on work described
in [8, 4], but differs from this earlier model in several important respects. First, our model
includes a perceptually-accurate simulated retina, which was not included in [8, 4]. Second, the visual routine responsible for moving gaze in our model is fundamentally different
from the earlier version. In [8, 4], the number of eye movements was largely determined
by the number of spatial scale filters used in the representation. The method used in the
current model to generate eye movements (Section 2.3) removes this upper limit. Third,
and most important to the topic of this paper, the current model is capable of integrating
both BU and TD information in guiding search behavior. The [8, 4] model was purely TD.
2.1. Overview
The model can be conceptually divided into three broad stages: (1) the creation of a saliency
map (SM) based on TD and BU analysis of a retinally-transformed image, (2) recognizing
the target, and (3) the operations required to generate eye movements. Within each of these
stages are several more specific operations, which we will now describe briefly in an order
determined by the processing flow.
Input image: The model accepts as input a high-resolution (1280 × 960 pixel) image of
the search scene, as well as a smaller image of the search target. A point is specified on the
target image and filter responses are collected from a region surrounding this point. In the
current study this point corresponded to the center of the target image.
Retina transform: The search image is immediately transformed to reflect the acuity limitations imposed by the human retina. To implement this neuroanatomical constraint, we
adopt a method described in [9], which was shown to provide a good fit to acuity limitations
in the human visual system. The approach takes an image and a fixation point as input, and
outputs a retina-transformed version of the image based on the fixation point (making it a
good front-end to our model). The initial retina transformation assumes fixation at the center of the image, consistent with the behavioral experiment. A new retina transformation of
the search image is conducted after each change in gaze.
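The full transform is the method of [9]; purely as an illustration of the idea (and not Perry and Geisler's actual algorithm), an eccentricity-dependent blur can be sketched by interpolating between progressively blurred copies of the image. The function name and the blur schedule below are our own assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def retina_transform(image, fixation, sigmas=(0, 2, 4, 8)):
    """Crude eccentricity-dependent blur (illustrative stand-in only).
    `image` is a 2-D float array, `fixation` an (x, y) pixel coordinate,
    and `sigmas` an assumed blur schedule from fovea to periphery."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xs - fixation[0], ys - fixation[1])
    ecc = ecc / ecc.max()                        # eccentricity in [0, 1]
    stack = np.stack([gaussian_filter(image, s) for s in sigmas])
    idx = ecc * (len(sigmas) - 1)                # fractional blur level
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(sigmas) - 1)
    frac = idx - lo
    rows, cols = np.ogrid[0:h, 0:w]
    return (1 - frac) * stack[lo, rows, cols] + frac * stack[hi, rows, cols]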
Saliency maps: Both the TD and the BU saliency maps are based on feature responses
from Gaussian filters of different orientations, scales, colors, and orders. These two maps
are then combined to create the final SM used to guide search (see Section 2.2 for details).
Negativity map: The negativity map keeps a spatial record of every nontarget location that
was fixated and rejected through the application of Gaussian inhibition, a process similar
to inhibition of return [10] that we refer to as "zapping". The existence of such a map is
supported by behavioral evidence indicating a high-capacity spatial memory for rejected
nontargets in a search task [11].
Find hotspot: The hotspot (HS) is defined as the point on the saliency map having the
largest saliency value. Although no biologically plausible mechanism for isolating the
hotspot is currently used, we assume that a standard winner-take-all (WTA) algorithm can
be used to find the SM hotspot.
Recognition thresholds: Recognition is accomplished by comparing the hotspot value
with two thresholds. The model terminates with a target-present judgment if the hotspot
value exceeds a high target-present threshold, set at .995 in the current study. A target-absent response is made if the hotspot value falls below a low target-absent threshold (not
used in the current study). If neither of these termination criteria are satisfied, processing
passes to the eye movement stage.
Foveal threshold: Processing in the eye movement stage depends on whether the model's
simulated fovea is fixated on the SM hotspot. This event is determined by computing
the Euclidean distance between the current location of the fovea's center and the hotspot
(CF2HS), then comparing this distance to a foveal threshold (FT). The FT, set at 0.5 deg
of visual angle, is determined by the retina transform and viewing angle and corresponds
to the radius of the foveal window size. The foveal window is the region of the image
not blurred by the retina transform function, much like the high-resolution foveola in the
human visual system.
Hotspot out of fovea: If the hotspot is not within the FT, meaning that the object giving rise
to the hotspot is not currently fixated, then the model will make an eye movement to bring
the simulated fovea closer to the hotspot?s location. In making this movement, the model
will be effectively canceling the effect of the retina transform, thereby enabling a judgment
regarding the hotspot pattern. The destination of the eye movement is computed by taking
the weighted centroid of activity on the thresholded saliency map (TSM). See Section 2.3
for additional details regarding the centroid calculation of the suggested fixation point (SF),
its relationship to the distance threshold for generating an eye movement (EMT), and the
dynamically-changing threshold used to remove those SM points offering the least evidence
for the target (+SM thresh).
Hotspot at fovea: If the simulated fovea reaches the hotspot (CF2HS < FT) and the target
is still not detected (HS < target-present threshold), the model is likely to have fixated
a nontarget. When this happens (a common occurrence in the course of a search), it is
desirable to inhibit the location of this false target so as not to have it re-attract attention or
gaze. To accomplish this, we inhibit or ?zap? the hotspot by applying a negative Gaussian
filter centered at the hotspot location (set at 63 pixels). Following this injection of negativity
into the SM, a new eye movement is made based on the dynamics outlined in Section 2.3.
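Putting the above stages together, the decision logic of a single trial can be summarized by the skeleton below (our own scaffolding, not the authors' code). The saliency construction, winner-take-all step, and centroid-based gaze shift of Section 2.3 are injected as callables; the .995 and 0.5-deg thresholds are those quoted in the text, while the 40-fixation cap mirrors the error criterion used for the human data and is our addition.

import math

def search_episode(make_sm, find_hotspot, next_fixation, start_fixation,
                   present_thresh=0.995, foveal_thresh=0.5,
                   max_fixations=40):
    """Skeleton of the fixation loop. `make_sm(fixation, rejected)` should
    return the retina-transformed saliency map with Gaussian inhibition at
    every previously rejected (zapped) location; `find_hotspot(sm)` returns
    ((x, y), value); `next_fixation(sm, fixation)` applies the Section 2.3
    centroid dynamics."""
    fixation, rejected = start_fixation, []      # `rejected` = negativity map
    for n in range(max_fixations):
        sm = make_sm(fixation, rejected)
        hotspot, value = find_hotspot(sm)        # winner-take-all
        if value > present_thresh:               # target-present judgment
            return 'present', n
        cf2hs = math.hypot(hotspot[0] - fixation[0],
                           hotspot[1] - fixation[1])
        if cf2hs < foveal_thresh:                # hotspot already foveated
            rejected.append(hotspot)             # 'zap' the false target
        else:
            fixation = next_fixation(sm, fixation)
    return 'absent', max_fixations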
2.2. Saliency map creation
The first step in creating the TD and BU saliency maps is to separate the retina-transformed
image into an intensity channel and two opponent-process color channels (R-G and B-Y). For each channel, we then extract visual features by applying a set of steerable 2D Gaussian-derivative filters, G(t, θ, s), where t is the order of the Gaussian kernel, θ is the orientation, and s is the spatial scale. The current model uses first and second order
Gaussians, 4 orientations (0, 45, 90, and 135 degrees), and 3 scales (7, 15 and 31 pixels), for a total of 24 filters. We therefore obtain 24 feature maps of filter responses per channel, M(t, θ, s), or alternatively, a 72-dimensional feature vector, F, for each pixel in the retina-transformed image.
The TD saliency map is created by correlating the retina-transformed search image with
the target feature vector F_t.²
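As a concrete sketch of this correlation step (our own simplified reading of "correlating"; rescaling the result to [0, 1] is an assumption made so that thresholds such as .995 are applicable):

import numpy as np

def td_saliency(features, target_vec, eps=1e-8):
    """Per-pixel correlation between image features and the target vector.
    `features`: (H, W, D) array of filter responses (D = 72 in the text);
    `target_vec`: length-D responses sampled at the target point."""
    f = features - features.mean(axis=2, keepdims=True)
    t = target_vec - target_vec.mean()
    num = np.tensordot(f, t, axes=([2], [0]))
    den = np.linalg.norm(f, axis=2) * np.linalg.norm(t) + eps
    corr = num / den                  # Pearson correlation, in [-1, 1]
    return (corr + 1.0) / 2.0         # rescaled so a hotspot can reach ~1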
To maintain consistency between the two saliency map representations, the same channels
and features used in the TD saliency map were also used to create the BU saliency map.
Feature-contrast signals on this map were obtained directly from the responses of the Gaussian derivative filters. For each channel, the 24 feature maps were combined into a single
map according to:

\sum_{t,\theta,s} N\big(|M(t,\theta,s)|\big) \qquad (1)

where N(·) is the normalization function described in [12]. The final BU saliency map
is then created by averaging the three combined feature maps. Note that this method of
creating a BU saliency map differs from the approach used in [12, 7] in that our filters
consisted of 1st and 2nd order derivatives of Gaussians and not center-surround DoG filters.
While the two methods of computing feature contrast are not equivalent, in practice they
yield very similar patterns of BU salience.
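A minimal sketch of Eq. (1) follows; the published N(·) operator of [12] is approximated here by range-normalizing each map and promoting maps with isolated peaks, which is a simplification of the exact operator, not a reimplementation of it:

import numpy as np

def normalize_map(m, eps=1e-8):
    """Simplified stand-in for the N(.) operator of [12]: rescale to
    [0, 1], then weight by the squared (max - mean) difference so that
    maps with a few strong peaks dominate near-uniform ones."""
    m = (m - m.min()) / (m.max() - m.min() + eps)
    return m * (m.max() - m.mean()) ** 2

def bu_saliency(channel_maps):
    """Eq. (1) per channel, then an average over channels. `channel_maps`
    maps each channel name ('intensity', 'rg', 'by') to its list of 24
    filter-response maps M(t, theta, s), each an (H, W) array."""
    per_channel = [sum(normalize_map(np.abs(m)) for m in maps)
                   for maps in channel_maps.values()]
    return sum(per_channel) / len(per_channel)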
² Note that because our TD saliency maps are derived from correlations between target and scene images, the visual statistics of these images are in some sense preserved and might be described as a BU component in our model. Nevertheless, the correlation-based guidance signal requires knowledge of a target (unlike a true BU model), and for this reason we will continue to refer to this as a TD process.
Finally, the combined SM was simply a linear combination of the TD and BU saliency
maps, where the weighting coefficient was a parameter manipulated in our experiments.
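In code, this final step is a single convex mixture; w below is the BU proportion (0, .25, .5, .75, 1.0) used in the experiments, and the assumption that both maps share a common [0, 1] scale is ours:

def combined_sm(td_map, bu_map, w):
    """Final saliency map as a linear TD/BU combination with BU weight w."""
    return (1.0 - w) * td_map + w * bu_map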
2.3. Eye movement generation
Our model defines gaze position at each moment in time by the weighted spatial average
(centroid) of signals on the SM, a form of neuronal population code for the generation
of eye movement [13, 14]. Although a centroid computation will tend to bias gaze in
the direction of the target (assuming that the target is the maximally salient pattern in the
image), gaze will also be pulled away from the target by salient nontarget points. When
the number of nontarget points is large, the eye will tend to move toward the geometric
center of the scene (a tendency referred to in the behavioral literature as the global effect,
[15, 16]); when the number of points is small, the eye will move more directly to the target.
To capture this activity-dependent eye movement behavior, we introduce a moving threshold, θ, that excludes points from the SM over time based on their signal strength. Initially θ will be set to zero, allowing every signal on the SM to contribute to the centroid gaze computation. However, with each timestep, θ is increased by .001, resulting in the exclusion of minimally salient points from the SM (+ SM thresh in Figure 1). The centroid of the SM, what we refer to as the suggested fixation point (SF), is therefore dependent on the current value of θ and can be expressed as:

SF = \frac{\sum_{S_p > \theta} p\, S_p}{\sum_{S_p > \theta} S_p}. \qquad (2)
Eventually, only the most salient points will remain on the thresholded saliency map
(TSM), resulting in the direction of gaze to the hotspot. If this hotspot is not the target,
θ can be decreased (− SM thresh in Figure 1) after zapping in order to reintroduce points
to the SM. Such a moving threshold is a plausible mechanism of neural computation easily
instantiated by a simple recurrent network [17].
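A direct sketch of Eq. (2) with the moving threshold (the array conventions and the empty-map fallback are our additions):

import numpy as np

def suggested_fixation(sm, theta):
    """Weighted centroid (Eq. 2) of saliency values exceeding the moving
    threshold theta; returns an (x, y) point. Falls back to the global
    maximum if thresholding removes every point."""
    ys, xs = np.nonzero(sm > theta)
    if len(xs) == 0:
        return np.unravel_index(sm.argmax(), sm.shape)[::-1]
    s = sm[ys, xs]
    return (np.dot(xs, s) / s.sum(), np.dot(ys, s) / s.sum())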
In order to prevent gaze from moving with each change in θ, which would result in an
unrealistically large number of very small eye movements, we impose an eye movement
threshold (EMT) that prevents gaze from shifting until a minimum distance between SF
and CF is achieved (SF2CF > EMT in Figure 1). The EMT is based on the signal and
noise characteristics of each retina-transformed image, and is defined as:
EMT = \max\Big(FT,\; d\big(1 + C \log\tfrac{Signal}{Noise}\big)\Big), \qquad (3)
where FT is the fovea threshold, C is a constant, and d is the distance between the current fixation and the hotspot. The Signal term is defined as the sum of all foveal saliency values on the TSM; the Noise term is defined as the sum of all other TSM values. The Signal/Noise log ratio is clamped to the range of [−1/C, 0]. The lower bound of the SF2CF distance is FT, and the upper bound is d. The eye movement dynamics can therefore be
summarized as follows: incrementing θ will tend to increase the SF2CF distance, which will result in an eye movement to SF once this distance exceeds the EMT.
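A sketch of Eq. (3) as reconstructed above; the text does not give the value of the constant C, so C = 1 below is a placeholder, and natural log is assumed:

import math

def eye_movement_threshold(ft, d, signal, noise, C=1.0, eps=1e-8):
    """EMT = max(FT, d * (1 + C * log(Signal/Noise))), with the log ratio
    clamped to [-1/C, 0] so that EMT always stays within [FT, d]."""
    ratio = math.log((signal + eps) / (noise + eps))
    ratio = min(0.0, max(-1.0 / C, ratio))
    return max(ft, d * (1.0 + C * ratio))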
3. Experimental methods
For each trial, the two human observers and the model were first shown an image of a
target (a tank). In the case of the human observers, the target was presented for one second
and presumably encoded into working memory. In the case of the model, the target was
represented by a single 72-dimensional feature vector as described in Section 2. A search
image was then presented, which remained visible to the human observers until they made
a button press response. Eye movements were recorded during this interval using an ELII
eyetracker. Section 2 details the processing stages used by the model. There were 44
images and targets, which were all modified versions of images in the TNO dataset [18].
The images subtended approximately 20° on both the human and simulated retinas.
4. Experimental results
Model and human data are reported from 2 experiments. For each experiment we tested 5
weightings of TD and BU components in the combined SM. Expressed as a proportion of
the BU component, these weightings were: BU 0 (TD only), BU .25, BU .5, BU .75, and
BU 1.0 (BU only).
4.1. Experiment 1
Table 1: Human and model search behavior at 5 TD/BU mixtures in Experiment 1 (model with retina and population averaging).

                 Human subjects        Model
                 H1      H2      TD only  BU: 0.25  BU: 0.5  BU: 0.75  BU only
Misses (%)       0.00    0.00    0.00     36.36     72.73    77.27     88.64
Fixations        4.55    4.43    4.55     18.89     20.08    21.00     22.40
Std Dev          0.88    2.15    0.82     10.44     12.50    10.29     12.58
Figure 2: Comparison of human and model scanpaths at different TD/BU weightings.
As can be seen from Table 1, the human observers were remarkably consistent in their
behavior. Each required an average of 4.5 fixations to find the target (defined as gaze
falling within .5 deg of the target's center), and neither generated an error (defined by a
failure to find the target within 40 fixations). Human target detection performance was
matched almost exactly by a pure TD model, both in terms of errors (0%) and fixations
(4.55). This exceptional match between human and model disappeared with the addition
of a BU component. Relative to the human and TD model, a BU 0.25 mixture model
resulted in a dramatic increase in the miss rate (36%) and in the average number of fixations
needed to acquire the target (18.9) on those trials in which the target was ultimately fixated.
These high miss and fixation rates continued to increase with larger weightings of the BU
contribution, reaching an unrealistic 89% misses and 22 fixations with a pure BU model.
Figure 2 shows representative eye movement scanpaths from our two human observers (a)
and the model at three different TD/BU mixtures (b, BU 0; c, BU 0.5; d, BU 1.0) for one
image. Note the close agreement between the human scanpaths and the behavior of the
TD model. Note also that, with the addition of a BU component, the model's eye either
wanders to high-contrast patterns (bushes, trees) before landing on the target (c), or misses
the target entirely (d).
4.2. Experiment 2
Recently, Navalpakkam & Itti [19] reported data from a saliency-based model also integrating BU and TD information to guide search. Among their many results, they compared
their model to the purely TD model described in [4] and found that their mixture model
offered a more realistic account of human behavior. Specifically, they observed that the [4]
model was too accurate, often predicting that the target would be fixated after only a single
eye movement. Although our current findings would seem to contradict [19]'s result, this
is not the case. Recall from Section 2.0 that our model differs from [4] in two respects:
(1) it retinally transforms the input image with each fixation, and (2) it uses a thresholded
population-averaging code to generate eye movements. Both of these additions would be
expected to increase the number of fixations made by the current model relative to the TD
model described in [4]. Adding a simulated retina should increase the number of fixations
by reducing the target-scene TD correlations and increasing the probability of false targets
emerging in the blurred periphery. Adding population averaging should increase fixations
by causing eye movements to locations other than hotspots. It may therefore be that [19]'s critique of [4] points out two specific weaknesses of [4]'s model rather than a general weakness of their TD approach.
To test this hypothesis, we disabled the artificial retina and the population averaging code
in our current model. The model now moves directly from hotspot to hotspot, zapping each
before moving to the next. Without retinal blurring and population averaging, the behavior
of this simpler model is now driven entirely by a WTA computation on the combined SM.
Moreover, with a BU weighting of 1.0, this version of our model now more closely approximates other purely BU models in the literature that also lack retinal acuity limitations and
population dynamics.
Table 2: Human and model search behavior at 5 TD/BU mixtures in Experiment 2 (model with NO retina and NO population averaging).

                 Human subjects        Model
                 H1      H2      TD only  BU: 0.25  BU: 0.5  BU: 0.75  BU only
Misses (%)       0.00    0.00    0.00     9.09      27.27    56.82     68.18
Fixations        4.55    4.43    1.00     8.73      16.60    13.37     14.71
Std Dev          0.88    2.15    0.00     9.15      12.29    9.20      12.84
Table 2 shows the data from this experiment. The first two columns replot the human data
from Table 1. Consistent with [19], we now find that the performance of a purely TD model
is too good. The target is consistently fixated after only a single eye movement, unlike the
4.5 fixations averaged by human observers. Also consistent with [19] is the observation that
a BU contribution may assist this model in better characterizing human behavior. Although
a 0.25 BU weighting resulted in a doubling of the human fixation rate and 9% misses, it
is conceivable that a smaller BU weighting could nicely describe human performance. As
in Experiment 1, at larger BU weightings the model again generated unrealistically high
error and fixation rates. These results suggest that, in the absence of retinal and neuronal
population-averaging constraints, BU information may play a small role in guiding search.
5. Conclusions
To what extent is TD and BU information used to guide search behavior? The findings from
Experiment 1 offer a clear answer to this question: when biologically plausible constraints
are considered, any addition of BU information to a purely TD model will worsen, not
improve, the match to human search performance (see [20] for a similar conclusion applied
to a walking task). The findings from Experiment 2 are more open to interpretation. It may
be possible to devise a TD model in which adding a BU component might prove useful,
but doing this would require building into this model biologically implausible assumptions.
A corollary to this conclusion is that, when these same biological constraints are added to
existing BU saliency-based models, these models may no longer be able to describe human
behavior.
A final fortuitous finding from this study is the surprising degree of agreement between our
purely TD model and human performance. The fact that this agreement was obtained by
direct comparison to human behavior (rather than patterns reported in the behavioral literature), and observed in eye movement variables, lends validity to our method. Future work
will explore the generality of our TD model, extending it to other forms of TD guidance
(e.g., scene context) and tasks in which a target may be poorly defined (e.g., categorical
search).
Acknowledgments
This work was supported by a grant from the ARO (DAAD19-03-1-0039) to G.J.Z.
References
[1] A. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive Psychology, 12:97–136, 1980.
[2] J. Wolfe, K. Cave, and S. Franzel. Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15:419–433, 1989.
[3] J. Wolfe. Guided search 2.0: A revised model of visual search. Psychonomic Bulletin and Review, 1:202–238, 1994.
[4] R. Rao, G. Zelinsky, M. Hayhoe, and D. Ballard. Eye movements in iconic visual search. Vision Research, 42:1447–1463, 2002.
[5] C. Koch and S. Ullman. Shifts of selective visual attention: Toward the underlying neural circuitry. Human Neurobiology, 4:219–227, 1985.
[6] L. Itti and C. Koch. Computational modeling of visual attention. Nature Reviews Neuroscience, 2(3):194–203, 2001.
[7] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10-12):1489–1506, 2000.
[8] R. Rao, G. Zelinsky, M. Hayhoe, and D. Ballard. Modeling saccadic targeting in visual search. In NIPS, 1995.
[9] J. S. Perry and W. S. Geisler. Gaze-contingent real-time simulation of arbitrary visual fields. In SPIE, 2002.
[10] R. M. Klein and W. J. MacInnes. Inhibition of return is a foraging facilitator in visual search. Psychological Science, 10(4):346–352, 1999.
[11] C. A. Dickinson and G. Zelinsky. Marking rejected distractors: A gaze-contingent technique for measuring memory during search. Psychonomic Bulletin and Review, in press.
[12] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. PAMI, 20(11):1254–1259, 1998.
[13] T. Sejnowski. Neural populations revealed. Nature, 332:308, 1988.
[14] C. Lee, W. Rohrer, and D. Sparks. Population coding of saccadic eye movements by neurons in the superior colliculus. Nature, 332:357–360, 1988.
[15] J. Findlay. Global visual processing for saccadic eye movements. Vision Research, 22:1033–1045, 1982.
[16] G. Zelinsky, R. Rao, M. Hayhoe, and D. Ballard. Eye movements reveal the spatio-temporal dynamics of visual search. Psychological Science, 8:448–453, 1997.
[17] J. L. Elman. Finding structure in time. Cognitive Science, 14:179–211, 1990.
[18] A. Toet, P. Bijl, F. L. Kooi, and J. M. Valeton. A high-resolution image dataset for testing search and detection models. Technical Report TNO-NM-98-A020, TNO Human Factors Research Institute, Soesterberg, The Netherlands, 1998.
[19] V. Navalpakkam and L. Itti. Modeling the influence of task on attention. Vision Research, 45:205–231, 2005.
[20] K. A. Turano, D. R. Geruschat, and F. H. Baker. Oculomotor strategies for direction of gaze tested with a real-world activity. Vision Research, 43(3):333–346, 2003.
The Forgetron:
A Kernel-Based Perceptron on a Fixed Budget
Ofer Dekel Shai Shalev-Shwartz Yoram Singer
School of Computer Science & Engineering
The Hebrew University, Jerusalem 91904, Israel
{oferd,shais,singer}@cs.huji.ac.il
Abstract
The Perceptron algorithm, despite its simplicity, often performs well on
online classification tasks. The Perceptron becomes especially effective
when it is used in conjunction with kernels. However, a common difficulty encountered when implementing kernel-based online algorithms
is the amount of memory required to store the online hypothesis, which
may grow unboundedly. In this paper we present and analyze the Forgetron algorithm for kernel-based online learning on a fixed memory
budget. To our knowledge, this is the first online learning algorithm
which, on one hand, maintains a strict limit on the number of examples
it stores while, on the other hand, entertains a relative mistake bound.
In addition to the formal results, we also present experiments with real
datasets which underscore the merits of our approach.
1 Introduction
The introduction of the Support Vector Machine (SVM) [8] sparked a widespread interest
in kernel methods as a means of solving (binary) classification problems. Although SVM
was initially stated as a batch-learning technique, it significantly influenced the development of kernel methods in the online-learning setting. Online classification algorithms that
can incorporate kernels include the Perceptron [6], ROMMA [5], ALMA [3], NORMA [4],
Ballseptron [7], and the Passive-Aggressive family of algorithms [1]. Each of these algorithms observes examples in a sequence of rounds, and constructs its classification function
incrementally, by storing a subset of the observed examples in its internal memory. The
classification function is then defined by a kernel-dependent combination of the stored examples. This set of stored examples is the online equivalent of the support set of SVMs,
however in contrast to the support, it continually changes as learning progresses. In this
paper, we call this set the active set, as it includes those examples that actively define the
current classifier. Typically, an example is added to the active set every time the online algorithm makes a prediction mistake, or when its confidence in a prediction is inadequately
low. A rapid growth of the active set can lead to significant computational difficulties. Naturally, since computing devices have bounded memory resources, there is the danger that
an online algorithm would require more memory than is physically available. This problem
becomes especially eminent in cases where the online algorithm is implemented as part of
a specialized hardware system with a small memory, such as a mobile telephone or an au-
tonomous robot. Moreover, an excessively large active set can lead to unacceptably long
running times, as the time-complexity of each online round scales linearly with the size of
the active set.
Crammer, Kandola, and Singer [2] first addressed this problem by describing an online
kernel-based modification of the Perceptron algorithm in which the active set does not exceed a predefined budget. Their algorithm removes redundant examples from the active set
so as to make the best use of the limited memory resource. Weston, Bordes and Bottou [9]
followed with their own online kernel machine on a budget. Both techniques work relatively well in practice, however they both lack a theoretical guarantee on their prediction
accuracy. In this paper we present the Forgetron algorithm for online kernel-based classification. To the best of our knowledge, the Forgetron is the first online algorithm with a
fixed memory budget which also entertains a formal worst-case mistake bound. We name
our algorithm the Forgetron since its update builds on that of the Perceptron and since it
gradually forgets active examples as learning progresses.
This paper is organized as follows. In Sec. 2 we begin with a more formal presentation of
our problem and discuss some difficulties in proving mistake bounds for kernel-methods
on a budget. In Sec. 3 we present an algorithmic framework for online prediction with a
predefined budget of active examples. Then in Sec. 4 we derive a concrete algorithm within
this framework and analyze its performance. Formal proofs of our claims are omitted due
to the lack of space. Finally, we present an empirical evaluation of our algorithm in Sec. 5.
2 Problem Setting
Online learning is performed in a sequence of consecutive rounds. On round t the online
algorithm observes an instance x_t, which is drawn from some predefined instance domain X. The algorithm predicts the binary label associated with that instance and is then provided with the correct label y_t ∈ {−1, +1}. At this point, the algorithm may use the instance-label pair (x_t, y_t) to improve its prediction mechanism. The goal of the algorithm
is to correctly predict as many labels as possible.
The predictions of the online algorithm are determined by a hypothesis which is stored
in its internal memory and is updated from round to round. We denote the hypothesis
used on round t by ft . Our focus in this paper is on margin based hypotheses, namely,
ft is a function from X to R where sign(ft (xt )) constitutes the actual binary prediction
and |ft (xt )| is the confidence in this prediction. The term yf (x) is called the margin
of the prediction and is positive whenever y and sign(f (x)) agree. We can evaluate the
performance of a hypothesis on a given example (x, y) in one of two ways. First, we can
check whether the hypothesis makes a prediction mistake, namely determine whether y =
sign(f (x)) or not. Throughout this paper, we use M to denote the number of prediction
mistakes made by an online algorithm on a sequence of examples (x1 , y1 ), . . . , (xT , yT ).
The second way we evaluate the predictions of a hypothesis is by using the hinge-loss
function, defined as,

\ell\big(f; (x, y)\big) = \begin{cases} 0 & \text{if } y f(x) \ge 1 \\ 1 - y f(x) & \text{otherwise.} \end{cases} \qquad (1)
The hinge-loss penalizes a hypothesis for any margin less than 1. Additionally, if y ≠ sign(f(x)) then ℓ(f; (x, y)) ≥ 1, and therefore the cumulative hinge-loss suffered over a sequence of examples upper bounds M. The algorithms discussed in this paper use kernel-based hypotheses that are defined with respect to a kernel operator K : X × X → ℝ which adheres to Mercer's positivity conditions [8]. A kernel-based hypothesis takes the form,

f(x) = \sum_{i=1}^{k} \alpha_i K(x_i, x), \qquad (2)

where x_1, . . . , x_k are members of X and α_1, . . . , α_k are real weights. To facilitate the
derivation of our algorithms and their analysis, we associate a reproducing kernel Hilbert
space (RKHS) with K in the standard way common to all kernel methods. Formally,
let H_K be the closure of the set of all hypotheses of the form given in Eq. (2). For any two functions, f(x) = \sum_{i=1}^{k} \alpha_i K(x_i, x) and g(x) = \sum_{j=1}^{l} \beta_j K(z_j, x), define the inner product between them to be ⟨f, g⟩ = \sum_{i=1}^{k}\sum_{j=1}^{l} \alpha_i \beta_j K(x_i, z_j). This inner-product naturally induces a norm defined by ‖f‖ = ⟨f, f⟩^{1/2} and a metric ‖f − g‖ = (⟨f, f⟩ − 2⟨f, g⟩ + ⟨g, g⟩)^{1/2}. These definitions play an important role in the analysis of
our algorithms. Online kernel methods typically restrict themselves to hypotheses that are
defined by some subset of the examples observed on previous rounds. That is, the hypothesis used on round t takes the form, f_t(x) = \sum_{i \in I_t} \alpha_i K(x_i, x), where I_t is a subset of {1, . . . , t−1} and x_i is the example observed by the algorithm on round i. As stated above, I_t is called the active set, and we say that example x_i is active on round t if i ∈ I_t.
Perhaps the most well known online algorithm for binary classification is the Perceptron [6]. Stated in the form of a kernel method, the hypotheses generated by the Perceptron take the form f_t(x) = \sum_{i \in I_t} y_i K(x_i, x). Namely, the weight assigned to each active example is either +1 or −1, depending on the label of that example. The Perceptron initializes I_1 to be the empty set, which implicitly sets f_1 to be the zero function. It then updates its hypothesis only on rounds where a prediction mistake is made. Concretely, on round t, if sign(f_t(x_t)) ≠ y_t then the index t is inserted into the active set. As a consequence, the
size of the active set on round t equals the number of prediction mistakes made on previous
rounds. A relative mistake bound can be proven for the Perceptron algorithm. The bound
holds for any sequence of instance-label pairs, and compares the number of mistakes made
by the Perceptron with the cumulative hinge-loss of any fixed hypothesis g ∈ H_K, even
one defined with prior knowledge of the sequence.
Theorem 1. Let K be a Mercer kernel and let (x_1, y_1), . . . , (x_T, y_T) be a sequence of examples such that K(x_t, x_t) ≤ 1 for all t. Let g be an arbitrary function in H_K and define ℓ*_t = ℓ(g; (x_t, y_t)). Then the number of prediction mistakes made by the Perceptron on this sequence is bounded by,

M \le \|g\|^2 + 2 \sum_{t=1}^{T} \ell^*_t.
Although the Perceptron is guaranteed to be competitive with any fixed hypothesis g ∈ H_K, the fact that its active set can grow without a bound poses a serious computational problem. In fact, this problem is common to most kernel-based online methods that do not explicitly monitor the size of I_t.
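For reference, the kernel Perceptron just described fits in a few lines (our own sketch, not the authors' code; `K` is any Mercer kernel, and a zero-margin prediction is counted as a mistake):

def kernel_perceptron(examples, K):
    """`examples`: iterable of (x, y) pairs with y in {-1, +1}.
    Stores (x_i, y_i) on every mistake; the hypothesis is
    f_t(x) = sum of y_i * K(x_i, x) over the active set."""
    active, mistakes = [], 0
    for x, y in examples:
        f = sum(yi * K(xi, x) for xi, yi in active)
        if y * f <= 0:                 # prediction mistake
            active.append((x, y))      # index t joins the active set
            mistakes += 1
    return active, mistakes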
As discussed above, our goal is to derive and analyze an online prediction algorithm which
resolves these problems by enforcing a fixed bound on the size of the active set. Formally,
let B be a positive integer, which we refer to as the budget parameter. We would like to
devise an algorithm which enforces the constraint |I_t| ≤ B on every round t. Furthermore,
we would like to prove a relative mistake bound for this algorithm, analogous to the bound
stated in Thm. 1. Regretfully, this goal turns out to be impossible without making additional
assumptions. We show this inherent limitation by presenting a simple counterexample
which applies to any online algorithm which uses a prediction function of the form given
in Eq. (2), and for which |I_t| ≤ B for all t. In this example, we show a hypothesis g ∈ H_K
and an arbitrarily long sequence of examples such that the algorithm makes a prediction
mistake on every single round whereas g suffers no loss at all. We choose the instance space
X to be the set of B+1 standard unit vectors in ℝ^{B+1}, that is X = {e_i}_{i=1}^{B+1}, where e_i is the vector with 1 in its i'th coordinate and zeros elsewhere. K is set to be the standard inner-product in ℝ^{B+1}, that is K(x, x′) = ⟨x, x′⟩. Now for every t, f_t is a linear combination of at most B vectors from X. Since |X| = B + 1, there exists a vector x_t ∈ X which is currently not in the active set. Furthermore, x_t is orthogonal to all of the active vectors and therefore f_t(x_t) = 0. Assume without loss of generality that the online algorithm we
are using predicts y_t to be −1 when f_t(x) = 0. If on every round we were to present the online algorithm with the example (x_t, +1) then the online algorithm would make a prediction mistake on every round. On the other hand, the hypothesis g̃ = \sum_{i=1}^{B+1} e_i is a member of H_K and attains a zero hinge-loss on every round. We have found a sequence of
examples and a fixed hypothesis (which is indeed defined by more than B vectors from X )
that attains a cumulative loss of zero on this sequence, while the number of mistakes made
by the online algorithm equals the number of rounds. Clearly, a theorem along the lines of
Thm. 1 cannot be proven.
One way to resolve this problem is to limit the set of hypotheses we compete with to a subset of H_K, which would naturally exclude g̃. In this paper, we limit the set of competitors to hypotheses with small norms. Formally, we wish to devise an online algorithm which is competitive with every hypothesis g ∈ H_K for which ‖g‖ ≤ U, for some constant U. Our counterexample indicates that we cannot prove a relative mistake bound with U set to at least \sqrt{B+1}, since that was the norm of g̃ in our counterexample. In this paper we come close to this upper bound by proving that our algorithms can compete with any hypothesis with a norm bounded by \frac{1}{4}\sqrt{(B+1)/\log(B+1)}.
3 A Perceptron with "Shrinking" and "Removal" Steps
The Perceptron algorithm will serve as our starting point. Recall that whenever the Perceptron makes a prediction mistake, it updates its hypothesis by adding the element t to I_t. Thus, on any given round, the size of its active set equals the number of prediction mistakes it has made so far. This implies that the Perceptron may violate the budget constraint |I_t| ≤ B. We can solve this problem by removing an example from the active set whenever its size exceeds B. One simple strategy is to remove the oldest example in the active set whenever |I_t| > B. Let t be a round on which the Perceptron makes a prediction mistake. We apply the following two-step update. First, we perform the Perceptron's update by adding t to I_t. Let I′_t = I_t ∪ {t} denote the resulting active set. If |I′_t| ≤ B we are done and we set I_{t+1} = I′_t. Otherwise, we apply a removal step by finding the oldest example in the active set, r_t = min I′_t, and setting I_{t+1} = I′_t \ {r_t}. The resulting
algorithm is a simple modification of the kernel Perceptron, which conforms with a fixed
budget constraint. While we are unable to prove a mistake bound for this algorithm, it is
nonetheless an important milestone on the path to an algorithm with a fixed budget and a
formal mistake bound.
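In code, the oldest-removal variant changes only the mistake update; a sketch, continuing the conventions above:

from collections import deque

def budget_perceptron(examples, K, B):
    """Kernel Perceptron with a hard budget B: when a mistake would push
    the active set past B examples, the oldest one is discarded."""
    active, mistakes = deque(), 0      # oldest example sits at the left end
    for x, y in examples:
        f = sum(yi * K(xi, x) for xi, yi in active)
        if y * f <= 0:
            active.append((x, y))
            mistakes += 1
            if len(active) > B:
                active.popleft()       # removal step: drop r_t = min I'_t
    return list(active), mistakes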
The removal of the oldest active example from I_t may significantly change the hypothesis and affect its accuracy. One way to overcome this obstacle is to reduce the weight of old examples in the definition of the current hypothesis. By controlling the weight of the oldest active example, we can guarantee that the removal step will not significantly affect the accuracy of our predictions. More formally, we redefine our hypothesis to be,

f_t = \sum_{i \in I_t} \sigma_{i,t}\, y_i K(x_i, \cdot),

where each σ_{i,t} is a weight in (0, 1]. Clearly, the effect of removing r_t from I_t depends on the magnitude of σ_{r_t,t}.
Using the ideas discussed above, we are now ready to outline the Forgetron algorithm. The
Forgetron initializes I_1 to be the empty set, which implicitly sets f_1 to be the zero function. On round t, if a prediction mistake occurs, a three step update is performed. The first step is the standard Perceptron update, namely, the index t is inserted into the active set and the weight σ_{t,t} is set to be 1. Let I′_t denote the active set which results from this update, and let f′_t denote the resulting hypothesis, f′_t(x) = f_t(x) + y_t K(x_t, x). The second step of the update is a shrinking step in which we scale f′_t by a coefficient φ_t ∈ (0, 1]. The value of φ_t is intentionally left unspecified for now. Let f″_t denote the resulting hypothesis, that is, f″_t = φ_t f′_t. Setting σ_{i,t+1} = φ_t σ_{i,t} for all i ∈ I′_t, we can write,

f''_t(x) = \sum_{i \in I'_t} \sigma_{i,t+1}\, y_i K(x_i, x).

The third and last step of the update is the removal step discussed above. That is, if the budget constraint is violated and |I′_t| > B then I_{t+1} is set to be I′_t \ {r_t} where r_t = min I′_t. Otherwise, I_{t+1} simply equals I′_t. The recursive definition of the weight σ_{i,t} can be unraveled to give the following explicit form, σ_{i,t} = \prod_{j \in I_{t-1},\, j \ge i} \phi_j. If the shrinking coefficients φ_t are sufficiently small, then the example weights σ_{i,t} decrease rapidly with t, and
particularly the weight of the oldest active example can be made arbitrarily small. Thus, if φ_t is small enough, then the removal step is guaranteed not to cause any significant damage. Alas, aggressively shrinking the online hypothesis with every update might itself degrade the performance of the online hypothesis, and therefore φ_t should not be set too small. The delicate balance between safe removal of the oldest example and over-aggressive scaling is our main challenge. To formalize this tradeoff, we begin with the mistake bound in Thm. 1 and investigate how it is affected by the shrinking and removal steps.
We focus first on the removal step. Let J denote the set of rounds on which the Forgetron
makes a prediction mistake and define the function,
\Psi(\lambda, \phi, \mu) = (\phi\lambda)^2 + 2\phi\lambda(1 - \phi\mu).

Let t ∈ J be a round on which |I_t| = B. On this round, example r_t is removed from the active set. Let μ_t = y_{r_t} f′_t(x_{r_t}) be the signed margin attained by f′_t on the active example being removed. Finally, we abbreviate,

\Psi_t = \begin{cases} \Psi(\sigma_{r_t,t}, \phi_t, \mu_t) & \text{if } t \in J \wedge |I_t| = B \\ 0 & \text{otherwise.} \end{cases}
Lemma 1 below states that removing example r_t from the active set on round t increases the mistake bound by Ψ_t. As expected, Ψ_t decreases with the weight of the removed example, σ_{r_t,t+1}. In addition, it is clear from the definition of Ψ_t that μ_t also plays a key role in determining whether x_{r_t} can be safely removed from the active set. We note in passing that [2] used a heuristic criterion similar to μ_t to dynamically choose which active example to remove on each online round.
Turning to the shrinking step, for every t ∈ J we define,

\Phi_t = \begin{cases} 1 & \text{if } \|f_{t+1}\| \ge U \\ \phi_t & \text{if } \|f'_t\| \le U \wedge \|f_{t+1}\| < U \\ \phi_t \|f'_t\| / U & \text{if } \|f'_t\| > U \wedge \|f_{t+1}\| < U. \end{cases}

Lemma 1 below also states that applying the shrinking step on round t increases the mistake bound by U² log(1/Φ_t). Note that if ‖f_{t+1}‖ ≥ U then Φ_t = 1 and the shrinking step on round t has no effect on our mistake bound. Intuitively, this is due to the fact that, in this case, the shrinking step does not make the norm of f_{t+1} smaller than the norm of our competitor, g.
Lemma 1. Let (x_1, y_1), . . . , (x_T, y_T) be a sequence of examples such that K(x_t, x_t) ≤ 1 for all t and assume that this sequence is presented to the Forgetron with a budget constraint B. Let g be a function in H_K for which ‖g‖ ≤ U, and define ℓ*_t = ℓ(g; (x_t, y_t)). Then,

M \le \|g\|^2 + 2\sum_{t=1}^{T} \ell^*_t \;+\; \sum_{t \in J} \Psi_t \;+\; U^2 \sum_{t \in J} \log(1/\Phi_t).
The first term in the bound of Lemma 1 is identical to the mistake bound of the standard Perceptron, given in Thm. 1. The second term is the consequence of the removal and shrinking steps. If we set the shrinking coefficients in such a way that the second term is at most M/2, then the bound in Lemma 1 reduces to M ≤ ‖g‖² + 2 Σ_t ℓ*_t + M/2. This can be restated as M ≤ 2‖g‖² + 4 Σ_t ℓ*_t, which is twice the bound of the Perceptron algorithm. The next lemma states sufficient conditions on φ_t under which the second term in Lemma 1 is indeed upper bounded by M/2.
Lemma 2. Assume that the conditions of Lemma 1 hold and that B ≥ 83. If the shrinking coefficients φ_t are chosen such that,

\sum_{t \in J} \Psi_t \le \frac{15}{32} M
\quad\text{and}\quad
\sum_{t \in J} \log(1/\Phi_t) \le \frac{\log(B+1)}{2(B+1)}\, M,

then the following holds,

\sum_{t \in J} \Psi_t + U^2 \sum_{t \in J} \log(1/\Phi_t) \le \frac{M}{2}.
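To see why these two conditions add up to M/2, substitute the value of U from Thm. 2 below (this verification of the constants is ours):

U^2 = \frac{B+1}{16\,\log(B+1)}
\;\;\Longrightarrow\;\;
U^2 \sum_{t \in J}\log(1/\Phi_t)
\;\le\; \frac{B+1}{16\,\log(B+1)} \cdot \frac{\log(B+1)}{2(B+1)}\, M
\;=\; \frac{M}{32},
\qquad
\frac{15}{32}M + \frac{1}{32}M = \frac{M}{2}.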
In the next section, we define the specific mechanism used by the Forgetron algorithm to
choose the shrinking coefficients φ_t. Then, we conclude our analysis by arguing that this
choice satisfies the sufficient conditions stated in Lemma 2, and obtain a mistake bound as
described above.
4 The Forgetron Algorithm
We are now ready to define the specific choice of ?t used by the Forgetron algorithm.
On each round, the Forgetron chooses ?t to be the maximal value in (0, 1] for which the
damage caused by the removal step is still manageable. To clarify our construction, define
Jt = {i ? J : i ? t} and Mt = |Jt |. In words, Jt is the set of rounds on which the
algorithm made a mistake up until round t, and Mt is the size of this set. We can now
rewrite the first condition in Lemma 2 as,
X
t?JT
?t ?
15
MT .
32
(3)
Instead of the above condition, the Forgetron enforces the following stronger condition,

\forall i \in \{1, \dots, T\}, \qquad \sum_{t \in J_i} \Psi_t \le \frac{15}{32} M_i. \qquad (4)
This is done as follows. Define Q_i = \sum_{t \in J_{i-1}} \Psi_t. Let i denote a round on which the algorithm makes a prediction mistake and on which an example must be removed from the active set. The i'th constraint in Eq. (4) can be rewritten as Ψ_i + Q_i ≤ (15/32) M_i. The Forgetron sets φ_i to be the maximal value in (0, 1] for which this constraint holds, namely,

\phi_i = \max\big\{ \phi \in (0, 1] \,:\, \Psi(\sigma_{r_i,i}, \phi, \mu_i) + Q_i \le \tfrac{15}{32} M_i \big\}.

Note that Q_i does not depend on φ and that Ψ(σ_{r_i,i}, φ, μ_i) is a quadratic expression in φ. Therefore, the value of φ_i can be found analytically. The pseudo-code of the Forgetron algorithm is given in Fig. 1.
Having described our algorithm, we now turn to its analysis. To prove a mistake bound
it suffices to show that the two conditions stated in Lemma 2 hold. The first condition of
the lemma follows immediately from the definition of φ_t. Using strong induction on the
size of J, we can show that the second condition holds as well. Using these two facts, the
following theorem follows as a direct corollary of Lemma 1 and Lemma 2.
INPUT: Mercer kernel K(·,·) ; budget parameter B > 0
INITIALIZE: I_1 = ∅ ; f_1 ≡ 0 ; Q_1 = 0 ; M_0 = 0
For t = 1, 2, . . .
  receive instance x_t ; predict label: sign(f_t(x_t))
  receive correct label y_t
  If y_t f_t(x_t) > 0
    set I_{t+1} = I_t, Q_{t+1} = Q_t, M_t = M_{t−1}, and ∀i ∈ I_t set σ_{i,t+1} = σ_{i,t}
  Else
    set M_t = M_{t−1} + 1
    (1) set I′_t = I_t ∪ {t}
    If |I′_t| ≤ B
      set I_{t+1} = I′_t, Q_{t+1} = Q_t, σ_{t,t} = 1, and ∀i ∈ I_{t+1} set σ_{i,t+1} = σ_{i,t}
    Else
      (2) define r_t = min I_t
          choose φ_t = max{φ ∈ (0, 1] : Ψ(σ_{r_t,t}, φ, μ_t) + Q_t ≤ (15/32) M_t}
          set σ_{t,t} = 1 and ∀i ∈ I′_t set σ_{i,t+1} = φ_t σ_{i,t}
          set Q_{t+1} = Q_t + Ψ_t
      (3) set I_{t+1} = I′_t \ {r_t}
  define f_{t+1} = Σ_{i ∈ I_{t+1}} σ_{i,t+1} y_i K(x_i, ·)

Figure 1: The Forgetron algorithm.
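A compact executable sketch of Fig. 1 follows (our own translation, not the authors' code). It relies on the reconstruction Ψ(λ, φ, μ) = (φλ)² + 2φλ(1 − φμ) given above, under which the constraint in step (2) is the quadratic inequality (λ² − 2λμ)φ² + 2λφ ≤ (15/32)M − Q, solvable in closed form:

import math

def forgetron(examples, K, B):
    """Forgetron sketch. `active` holds (x, y, sigma) triples ordered from
    oldest to newest; Q accumulates the Psi_t terms and M counts mistakes."""
    active, Q, M = [], 0.0, 0

    def f(x, A):                        # current hypothesis applied to x
        return sum(s * yi * K(xi, x) for xi, yi, s in A)

    for x, y in examples:
        if y * f(x, active) > 0:
            continue                    # correct prediction: nothing changes
        M += 1
        active.append((x, y, 1.0))      # step (1): Perceptron update
        if len(active) <= B:
            continue                    # budget not exceeded
        # Step (2): shrinking. Find the maximal phi in (0, 1] satisfying
        # Psi(lam, phi, mu) + Q <= (15/32) * M, a quadratic inequality.
        xr, yr, lam = active[0]         # oldest active example r_t
        mu = yr * f(xr, active)         # margin of f'_t on the removed example
        c = (15.0 / 32.0) * M - Q       # slack left this round (positive)
        a, b = lam * lam - 2.0 * lam * mu, 2.0 * lam
        if a + b <= c:                  # Psi at phi = 1 is already small enough
            phi = 1.0
        elif a == 0.0:
            phi = c / b
        else:                           # relevant root of a*phi^2 + b*phi = c
            phi = (-b + math.sqrt(b * b + 4.0 * a * c)) / (2.0 * a)
        Q += (lam * phi) ** 2 + 2.0 * lam * phi * (1.0 - phi * mu)
        active = [(xi, yi, phi * s) for xi, yi, s in active]
        del active[0]                   # step (3): remove the oldest example
    return active, M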
Theorem 2. Let (x_1, y_1), . . . , (x_T, y_T) be a sequence of examples such that K(x_t, x_t) ≤ 1 for all t. Assume that this sequence is presented to the Forgetron algorithm from Fig. 1 with a budget parameter B ≥ 83. Let g be a function in H_K for which ‖g‖ ≤ U, where U = \frac{1}{4}\sqrt{(B+1)/\log(B+1)}, and define ℓ*_t = ℓ(g; (x_t, y_t)). Then, the number of prediction mistakes made by the Forgetron on this sequence is at most,

M \le 2\|g\|^2 + 4 \sum_{t=1}^{T} \ell^*_t.
5 Experiments and Discussion
In this section we present preliminary experimental results which demonstrate the merits of the Forgetron algorithm. We compared the performance of the Forgetron with the
method described in [2], which we abbreviate by CKS. When the CKS algorithm exceeds
its budget, it removes the active example whose margin would be the largest after the removal. Our experiment was performed with two standard datasets: the MNIST dataset,
which consists of 60,000 training examples, and the census-income (adult) dataset, with
200,000 examples. The labels of the MNIST dataset are the 10 digit classes, while the setting we consider in this paper is that of binary classification. We therefore generated binary
problems by splitting the 10 labels into two sets of equal size in all possible ways, totaling \binom{10}{5}/2 = 126 classification problems. For each budget value, we ran the two algorithms on
all 126 binary problems and averaged the results. The labels in the census-income dataset
are already binary, so we ran the two algorithms on 10 different permutations of the examples and averaged the results. Both algorithms used a fifth degree non-homogeneous
polynomial kernel. The results of these experiments are summarized in Fig. 2. The accuracy of the standard Perceptron (which does not depend on B) is marked in each plot
Figure 2: The error of different budget algorithms (Forgetron and CKS) as a function of the budget size B on the census-income (adult) dataset (left, B up to 6000) and on the MNIST dataset (right, B up to 1800); both panels plot average error against budget size B. The Perceptron's active set reaches a size of 14,626 for census-income and 1,886 for MNIST. The Perceptron's error is marked with a horizontal dashed black line.
using a horizontal dashed black line. Note that the Forgetron outperforms CKS on both
datasets, especially when the value of B is small. In fact, on the census-income dataset, the
Forgetron achieves almost the same performance as the Perceptron with only a fifth of the
active examples. In contrast to the Forgetron, which performs well on both datasets, the
CKS algorithm performs rather poorly on the census-income dataset. This can be partly
attributed to the different level of difficulty of the two classification tasks. It turns out that
the performance of CKS deteriorates as the classification task becomes more difficult. In
contrast, the Forgetron seems to perform well on both easy and difficult classification tasks.
In this paper we described the Forgetron algorithm which is a kernel-based online learning
algorithm with a fixed memory budget. We proved that the Forgetron
is competitive with
p
any hypothesis whose norm is upper bounded by U = 14 (B + 1)/ log(B + 1). We
further argued that no algorithm with a?budget of B active examples can be competitive
with every hypothesis whose
? norm is B + 1, on every input sequence. Bridging the
small gap between U and B + 1 remains an open problem. The analysis presented in
this paper can be used to derive a family of online algorithms of which the Forgetron is
only one special case. This family of algorithms, as well as complete proofs of our formal
claims and extensive experiments, will be presented in a long version of this paper.
References
[1] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Technical report, The Hebrew University, 2005.
[2] K. Crammer, J. Kandola, and Y. Singer. Online classification on a budget. NIPS, 2003.
[3] C. Gentile. A new approximate maximal margin classification algorithm. JMLR, 2001.
[4] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8):2165–2176, 2002.
[5] Y. Li and P. M. Long. The relaxed online maximum margin algorithm. NIPS, 1999.
[6] F. Rosenblatt. The Perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958.
[7] S. Shalev-Shwartz and Y. Singer. A new perspective on an old perceptron algorithm.
COLT, 2005.
[8] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[9] J. Weston, A. Bordes, and L. Bottou. Online (and offline) on an even tighter budget.
AISTATS, 2005.
1,989 | 2,807 | Ideal Observers for Detecting Motion:
Correspondence Noise
Hongjing Lu
Department of Psychology, UCLA
Los Angeles, CA 90095
[email protected]
Alan Yuille
Department of Statistics, UCLA
Los Angeles, CA 90095
[email protected]
Abstract
We derive a Bayesian Ideal Observer (BIO) for detecting motion and
solving the correspondence problem. We obtain Barlow and Tripathy's
classic model as an approximation. Our psychophysical experiments
show that the trends of human performance are similar to the Bayesian
Ideal, but overall human performance is far worse. We investigate ways
to degrade the Bayesian Ideal but show that even extreme degradations
do not approach human performance. Instead we propose that humans
perform motion tasks using generic, general purpose, models of motion.
We perform more psychophysical experiments which are consistent with
humans using a Slow-and-Smooth model and which rule out an alternative model using Slowness.
1
Introduction
Ideal Observers give fundamental limits for performing visual tasks (somewhat similar to
Shannon's limits on information transfer). They give benchmarks against which to evaluate
human performance. This enables us to determine objectively what visual tasks humans
are good at, and may help point the way to underlying neuronal mechanisms. For a recent
review, see [1].
In an influential paper, Barlow and Tripathy [2] tested the ability of human subjects to detect
dots moving coherently in a background of random dots. They derived an "ideal observer"
model using techniques from Signal Detection theory [3]. They showed that their model
predicted the trends of the human performance as properties of the stimuli changed, but that
humans performed far worse than their model. They argued that degrading their model,
by lowering the spatial resolution, would give predictions closer to human performance.
Barlow and Tripathy's model has generated considerable interest, see [4,5,6,7].
We formulate this motion problem in terms of Bayesian Decision Theory and derive a
Bayesian Ideal Observer (BIO) model. We describe why Barlow and Tripathy's (BT) model
is not fully ideal, show that it can be obtained as an approximation to the BIO, and determine conditions under which it is a good approximation. We perform psychophysical experiments under a range of conditions and show that the trends of human subjects are more
similar to those of the BIO. We investigate whether degrading the Bayesian Ideal enables
us to reach human performance, and conclude that it does not (without implausibly large
deformations). We comment that Barlow and Tripathy's degradation model is implausible
due to the nature of the approximations used.
Instead we show that a generic motion detection model which uses a slow-and-smooth
assumption about the motion field [8,9] gives similar performance to human subjects under
a range of experimental conditions. A simpler approach using a slowness assumption alone
does not match new experimental data that we present. We conclude that human observers
are not ideal, in the sense that they do not perform inference using the model that the
experimenter has chosen to generate the data, but may instead use a general purpose model
perhaps adapted to the motion statistics of natural images.
2
Bayes Decision Theory and Ideal Observers
We now give the basic elements of Bayes Decision Theory. The input data is D and
we seek to estimate a binary state W (e.g. coherent or incoherent motion, horizontal motion to right or to left). We assume models P (D|W ) and P (W ). We define
a decision rule δ(D) and a loss function L(δ(D), W) = 1 − δ_{δ(D),W}. The risk is R(δ) = Σ_{D,W} L(δ(D), W) P(D|W) P(W).
Optimal performance is given by the Bayes rule: δ* = arg min_δ R(δ). The fundamental limits are given by the Bayes risk: R* = R(δ*). Bayes risk is the best performance that can
be achieved. It corresponds to ideal performance.
Barlow and Tripathy's (BT) model does not achieve Bayes risk. This is because they used simplifications to derive it using concepts from Signal Detection theory (SDT). SDT is essentially the application of Bayes Decision Theory to the task of signal detection but, for
historical reasons, SDT restricts itself to a limited class of probability models and is unable
to capture the complexity of the motion problem.
3
Experimental Setup and Correspondence Noise
We now give the details of Barlow and Tripathy's stimuli, their model, and their experiments. The stimuli consist of two image frames with N dots in each frame. The dots in the
first frame are at random positions. For coherent stimuli, see figure (1), a proportion CN
of dots move coherently left or right horizontally with a fixed translation motion with displacement T. The remaining N(1 − C) dots in the second frame are generated at random.
For incoherent stimuli, the dots in both frames are generated at random.
Estimating motion for these stimuli requires solving the correspondence problem to match
dots between frames. For coherent motion, the noise dots act as correspondence noise and
make the matching harder, see the rightmost panel in figure (1).
Barlow and Tripathy perform two types of binary forced choice experiments. In detection
experiments, the task is to determine whether the stimulus is coherent or incoherent motion.
For discrimination experiments, the goal is to determine if the motion is to the right or the
left.
The experiments are performed by adjusting the fraction C of coherently moving dots until
the human subject's performance is at threshold (i.e. 75 percent correct). Barlow and Tripathy's (BT) model gives the proportion of dots at threshold to be C = 1/√(Q − N), where Q is the size of the image lattice. This is approximately 1/√Q (because N ≪ Q) and so is independent of the density of dots. Barlow and Tripathy compare the thresholds of the human subjects with those of their model for a range of experimental conditions which we will discuss in later sections.
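As a quick numerical illustration (values chosen arbitrarily here, not taken from the paper), the predicted threshold depends on the lattice size Q but barely on the dot count N:

```python
import math

Q = 100 * 100                 # hypothetical lattice of 100 x 100 positions
for N in (100, 500, 1000):
    C_thresh = 1.0 / math.sqrt(Q - N)    # BT threshold prediction
    print(N, round(C_thresh, 4))         # ~0.01 for every N, since N << Q
```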
Figure 1: The left three panels show coherent stimuli with N = 20, C = 0.1, N = 20, C =
0.5 and N = 20, C = 1.0 respectively. The closed and open circles denote dots in the first
and second frame respectively. The arrows show the motion of those dots which are moving
coherently. Correspondence noise is illustrated by the far right panel showing that a dot in
the first frame has many candidate matches in the second frame.
4
The Bayesian Ideal Model
We now compute the Bayes rule and Bayes risk by taking into account exactly how the data
is generated. We denote the dot positions in the first and second frame by D = ({x_i : i = 1, ..., N}, {y_a : a = 1, ..., N}). We define correspondence variables V_ia: V_ia = 1 if x_i is matched to y_a, V_ia = 0 otherwise.
The generative model for the data is given by:

P(D|Coh, T) = Σ_{V_ia} P({y_a}|{x_i}, {V_ia}, T) P({V_ia}) P({x_i})   (coherent),
P(D|Incoh) = P({y_a}) P({x_i})   (incoherent).     (1)
The prior distributions for the dot positions P({x_i}), P({y_a}) allow all configurations of the dots to be equally likely. They are therefore of the form P({x_i}) = P({y_a}) = (Q − N)!/Q!, where Q is the number of lattice points. The model P({y_a}|{x_i}, {V_ia}, T) for coherent motion is P({y_a}|{x_i}, {V_ia}, T) = ((Q − N)!/(Q − CN)!) Π_{ia} (δ_{y_a, x_i + T})^{V_ia}. We set the prior P({V_ia}) to be the uniform distribution. There is a constraint Σ_{ia} V_ia = CN (since only CN dots move coherently).
This gives:

P(D|Incoh) = [(Q − N)!/Q!] · [(Q − N)!/Q!],
P(D|Coh, T) = Σ_{V_ia} [ ((N − CN)!/N!)² (CN)! ] Π_{ia} (δ_{y_a, x_i + T})^{V_ia}.

These can be simplified further by observing that Σ_{V_ia} Π_{ia} (δ_{y_a, x_i + T})^{V_ia} = σ!/[(σ − M)! M!] (with M = CN matched pairs), where σ is the total number of matches, i.e. the number of dots in the first frame that have a corresponding dot at displacement T in the second frame (this includes "fake" matches due to chance alignment of noise dots in the two frames).
The Bayes rules for performing the tasks are given by testing the log-likelihood ratios: (i) log[P(D|Coh, T)/P(D|Incoh)] for detection (i.e. coherent versus incoherent), and (ii) log[P(D|Coh, T)/P(D|Coh, −T)] for discrimination (i.e. motion to right or to left). For detection, the log-likelihood ratio is a function of σ. For discrimination, the log-likelihood ratio is a function of the number of matches to the right σ_r and to the left σ_l. It is straightforward to calculate the Bayes risk and determine coherence thresholds.
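For illustration only (an editor's sketch, not the authors' implementation), the detection statistic can be simulated directly: σ is a match count, and the log-likelihood ratio can be approximated by Monte Carlo over stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)

def stimulus(N, C, T, side):
    """Two frames of N dots on a side x side lattice; a fraction C of the
    dots translates horizontally by T, the rest are redrawn at random."""
    x = rng.integers(0, side, size=(N, 2))
    n_coh = int(round(C * N))
    y = np.vstack([(x[:n_coh] + [T, 0]) % side,
                   rng.integers(0, side, size=(N - n_coh, 2))])
    return x, y

def match_count(x, y, T, side):
    """sigma: second-frame dots lying at displacement T from some
    first-frame dot (chance alignments of noise dots count too)."""
    targets = {tuple(p) for p in (x + [T, 0]) % side}
    return sum(tuple(p) in targets for p in y)

def detection_llr(sig_obs, N, C, T, side, n_sim=5000):
    """Monte Carlo stand-in for log P(sigma|Coh,T) / P(sigma|Incoh)."""
    s_c = [match_count(*stimulus(N, C, T, side), T, side) for _ in range(n_sim)]
    s_i = [match_count(*stimulus(N, 0.0, T, side), T, side) for _ in range(n_sim)]
    p_c = (np.array(s_c) == sig_obs).mean() + 1e-9   # smoothed frequencies
    p_i = (np.array(s_i) == sig_obs).mean() + 1e-9
    return np.log(p_c / p_i)
```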
We can rederive Barlow and Tripathy's model as an approximation to the Bayesian Ideal. They make two approximations: (i) they model the distribution of σ as binomial, (ii) they use d′. Both approximations are very good near threshold, except for small N. The use of d′ can be justified if P(σ|Coh, T) and P(σ|Incoh) are Gaussians with similar variance. This is true for large N = 1000 and a range of C but not so good for small N = 100, see figure (2).
[Figure 2 plots: the distributions P(σ|Coh, T) and P(σ|Incoh), labelled P(σ|C) and P(σ|N), shown in four panels with parameters (N = 1000, C = 0.5%), (N = 100, C = 1%), (N = 200, C = 2.5%), and (N = 1000, C = 5%).]
Figure 2: We plot P(σ|Coh, T) and P(σ|Incoh), shown as P(σ|C) and P(σ|N) respectively, for a range of N and C. One of Barlow and Tripathy's two approximations is justified if the distributions are Gaussian with the same variance. This is true for large N (left two panels) but fails for small N (right two panels). Note that human thresholds are roughly 30 times higher than for BIO (the scales on graphs differ).
We computed the coherence threshold for the BIO and the BT models for N = 100 to N =
1000, see the second and fourth panels in figure (3). As described earlier, the BT threshold
is approximately independent of the number N of dots. Our computations showed that the
BIO threshold is also roughly constant except for small N (this is not surprising in light of figure (2)). This motivated psychophysics experiments to determine how humans performed for small N (this range of dots was not explored in Barlow and Tripathy's experiments). All our data points are from 300 trials using QUEST, so error bars are so small that we do not include them.
We performed the detection and discrimination tasks with translation motion T = 16 (as
in Barlow and Tripathy). For detection and discrimination, the human subjects' thresholds showed similar trends to the thresholds for BIO and BT. But human performance at small N is more consistent with BIO, see figure (3).
[Figure 3 plots: coherence threshold (log scale) versus dot number N from 100 to 10000, with panels for human subjects HL and RK and for the Bayesian model versus the Barlow & Tripathy model, on detection and discrimination tasks.]
Figure 3: The left two panels show detection thresholds: human subjects (far left) and BIO and BT thresholds (left). The right two panels show discrimination thresholds: human subjects (right) and BIO and BT (far right).
But probably the most striking aspect of figure (3) is how poorly humans perform compared
to the models. The thresholds for BIO are always higher than those for BT, but these
differences are almost negligible compared to the differences with the human subjects. The
experiments also show that the human subject trends differ from the models at large N .
But these are extreme conditions where there are dots on most points on the image lattice.
5
Degrading the Ideal Observer Models
We now degrade the Bayes Ideal model to see if we can obtain human performance. We
consider two mechanisms: (A) Humans do not know the precise value of the motion translation T. (B) Humans have large spatial uncertainty. We will also combine both mechanisms.
For (A), we model lack of knowledge of the velocity T by summing over different motions. We generate the stimuli as before from P(D|Incoh) or P(D|Coh, T), but we make the decision by thresholding: log[ Σ_T P(D|Coh, T) P(T) / P(D|Incoh) ].
For (B), we model lack of spatial resolution by replacing P({y_a}|{x_i}, {V_ia}, T) = ((Q − N)!/(Q − CN)!) Π_{ia} V_ia δ_{y_a, x_i + T} by P({y_a}|{x_i}, {V_ia}, T) = ((Q − N)!/(Q − CN)!) Π_{ia} V_ia f_W(y_a, x_i + T). Here W is the width of a spatial window, so that f_W(a, b) = 1/W², if |a − b| < W; f_W(a, b) = 0, otherwise.
Our calculations, see figure (4), show that neither (A) nor (B) nor their combination are sufficient to account for the poor performance of human subjects. Lack of knowledge of the correct motion (and consequently summing over several models) does little to degrade performance. Decreasing spatial resolution does degrade performance but even huge degradations are insufficient to reach human levels. Barlow and Tripathy [2] argue that they can degrade their model to reach human performance but the degradations are huge and they occur in conditions (e.g. N = 50 or N = 100) where their model is not a good approximation to the true Bayesian Ideal Observer.
[Figure 4 plot: coherence threshold versus spatial uncertainty range in pixels (5, 9, 17, 33) for the degraded models (unknown velocity, spatial uncertainty, lattice separation) and for human performance.]
Figure 4: Comparing the degraded models to human performance. We use a log-log plot
because the differences between humans and model thresholds are very large.
6
Slowness and Slow-and-Smooth
[Figure 5 plots: coherence threshold versus dot number N (100 to 10000) for speeds 2, 8 and 16; panels for human subjects HL and RK, the 2D nearest neighbor model, and the 1D nearest neighbor model, with average human performance overlaid on the model panels.]
We now consider an alternative explanation for why human performance differs so greatly
from the Bayesian Ideal Observer. Perhaps human subjects do not use the ideal model
(which is only known to the designer of the experiments) and instead use a general purpose
motion model. We now consider two possible models: (i) a slowness model, and (ii) a slow
and smooth model.
Figure 5: The coherence threshold as a function of N for different translation motions T .
From left to right, human subject (HL), human subject (RK), 2DNN (shown for T = 16
only), and 1DNN. In the two right panels we have drawn the average human performance
for comparison.
The slowness model is partly motivated by Ullman's minimal mapping theory [10] and
partly by the design of practical computer vision tracking systems. This model solves
the correspondence problem by simply matching a dot in the first frame to the closest
dot in the second frame. We consider a 2D nearest neighbour model (2DNN) and a 1D
nearest neighbour model (1DNN), for which the matching is constrained to be in horizontal
directions only. After the motion has been calculated we perform a log-likelihood test
to solve the discrimination and detection tasks. This enables us to calculate coherence
thresholds, see figure (5). Both 1DNN and 2DNN predict that correspondence will be easy
for small translation motions even when the number of dots is very large. This motivates a
new class of experiments where we vary the translation motion.
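A minimal sketch of the nearest-neighbor baselines follows (an illustration, not the authors' code; the 1DNN variant is approximated by restricting candidates to the same row):

```python
import numpy as np

def nn_flow(x, y, one_d=False):
    """Slowness baseline: match each first-frame dot to the closest
    second-frame dot (2DNN); with one_d=True only dots on the same row
    are candidates, a simple stand-in for the 1DNN variant."""
    flows = []
    for p in x:
        d = y - p                               # displacement candidates
        cand = d[d[:, 1] == 0] if one_d else d  # row index assumed at 1
        if len(cand):
            flows.append(cand[np.argmin(np.linalg.norm(cand, axis=1))])
    return np.array(flows)
```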
Our experiments show that 1DNN and 2DNN are poor fits to human performance. Human
performance thresholds are relatively insensitive to the number N of dots and the translation motion T , see the two left panels in figure (5). By contrast, the 1DNN and 2DNN
thresholds are either far lower than humans for small N or far higher at large N with a
transition that depends on T . We conclude that the 1DNN and 2DNN models do not match
human performance.
Figure 6: The motion flows from Slow-and-Smooth for N = 100 as functions of C and
T . From left to right, C = 0.1, C = 0.2, C = 0.3, C = 0.5. From top to bottom,
T = 4, T = 8, T = 16. The closed and open circles denote dots in the first and second
frame respectively. The arrows indicate the motion flow specified by the Slow-and-Smooth
model.
We now consider the Slow-and-Smooth model [8,9] which has been shown to account for
a range of motion phenomena. We use a formulation [8] that was specifically designed for
dealing with the correspondence problem.
This gives a model of the form P(V, v|{x_i}, {y_a}) = (1/Z) e^{−E[V,v]/T_m}, where

E[V, v] = Σ_{i=1}^N Σ_{a=1}^N V_ia (y_a − x_i − v(x_i))² + λ||Lv||² + η Σ_{i=1}^N V_i0,   (2)

L is an operator that penalizes motion fields which are not slow and smooth, and depends on a parameter σ; see Yuille and Grzywacz for details [8]. We impose the constraint that Σ_{a=0}^N V_ia = 1 for all i, which enforces that each point i in the first frame is either unmatched, if V_i0 = 1, or is matched to a point a in the second frame.
We implemented this model using an EM algorithm to estimate the motion field v(x) that maximizes P(v|{x_i}, {y_a}) = Σ_V P(V, v|{x_i}, {y_a}). The parameter settings are T_m = 0.001, λ = 0.5, η = 0.01, σ = 0.2236. (The units of length are normalized by the size of the image.) The size of σ determines the spatial scale of the interaction between dots [8]. These parameter settings give correct motion directions in the condition that all dots move coherently, C = 1.0.
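For concreteness, the energy of equation (2) can be written down directly; the sketch below is illustrative only, and the ||Lv||² regularizer is left as a caller-supplied term (the true operator L of [8] is more elaborate than the crude finite-difference default used here):

```python
import numpy as np

def slow_smooth_energy(V, v, x, y, lam=0.5, eta=0.01, L_norm_sq=None):
    """E[V, v] of equation (2). V has shape (N, N+1), column 0 being the
    'unmatched' option V_i0, and each row sums to one; v is the flow at
    the first-frame dots. P(V, v) is then proportional to exp(-E / T_m)."""
    N = len(x)
    data = sum(V[i, a + 1] * np.sum((y[a] - x[i] - v[i]) ** 2)
               for i in range(N) for a in range(N))
    reg = L_norm_sq(v) if L_norm_sq else np.sum(np.diff(v, axis=0) ** 2)
    return data + lam * reg + eta * V[:, 0].sum()
```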
The following results, see figure (6), show that for 100 dots (N = 100) the results of the
slow-and-smooth model are similar to those of the human subjects for a range of different
translation motions. Slow-and-Smooth starts giving coherence thresholds between C = 0.2
and C = 0.3 consistent with human performance. Lower thresholds occurred for slower
coherent translations in agreement with human performance.
Slow-and-Smooth also gives thresholds similar to human performance when we alter the
number N of dots, see figure (7). Once again, Slow-and-Smooth starts giving the correct
horizontal motion between C = 0.2 and C = 0.3.
Figure 7: The motion fields of Slow-and-Smooth for T = 16 as a function of C and N.
From left to right, C = 0.1, C = 0.2, C = 0.3, C = 0.5. From top to bottom, N =
50, N = 100, N = 1000. Same conventions as for previous figure.
7
Summary
We defined a Bayes Ideal Observer (BIO) for correspondence noise and showed that Barlow and Tripathy's (BT) model [2] can be obtained as an approximation. We performed psychophysical experiments which showed that the trends of human performance were more similar to those of BIO (when it differed from BT). We attempted to account for humans' poor performance (compared to BIO) by allowing for degradations of the model such as poor spatial resolution and uncertainty about the precise translation velocity. We concluded that these degradations had to be implausibly large to account for the poorness of human performance. We noted that Barlow and Tripathy's degradation model [2] takes them into a regime where their model is a bad approximation to the BIO. Instead, we investigated the possibility that human observers perform these motion tasks using generic probability models for motion, possibly adapted to the statistics of motion in the natural world. Further psychophysical experiments showed that human performance was inconsistent with a model that prefers slow motion. But human performance was consistent with the Slow-and-Smooth model [8,9].
We conclude with two metapoints. Firstly, it is possible to design ideal observer models for
complex stimuli using techniques from Bayes decision theory. There is no need to restrict
oneself to the traditional models described in classic signal detection books such as Green
and Swets [3]. Secondly, human performance at visual tasks may be based on generic
models, such as Slow-and-Smooth, rather than the ideal models for the experimental tasks
(known only to the experimenter).
Acknowledgements
We thank Zili Liu for helpful discussions. We gratefully acknowledge funding support from the
American Association of University Women (HL), NSF0413214 and W.M. Keck Foundation (ALY).
References
[1] Geisler, W.S. (2002) ?Ideal Observer Analysis?. In L. Chalupa and J. Werner (Eds). The Visual
Neuroscienes. Boston. MIT Press. 825-837.
[2] Barlow, H., and Tripathy, S.P. (1997) Correspondence noise and signal pooling in the detection of
coherent visual motion. Journal of Neuroscience, 17(20), 7954-7966.
[3] Green, D.M., and Swets, J.A. (1966) Signal detection theory and psychophysics. New York:
Wiley.
[4] Morrone, M.C., Burr, D. C., and Vaina, L. M. (1995) Two stages of visual processing for radial
and circular motion. Nature, 376(6540), 507-509.
[5] Neri, P., Morrone, M.C., and Burr, D.C. (1998) Seeing biological motion. Nature, 395(6705),
894-896.
[6] Song, Y., and Perona, P. (2000) A computational model for motion detection and direction discrimination in humans. IEEE computer society workshop on Human Motion, Austin, Texas.
[7] Wallace, J.M and Mamassian, P. (2004) The efficiency of depth discrimination for non-transparent
and transparent stereoscopic surfaces. Vision Research, 44, 2253-2267.
[8] Yuille, A.L. and Grzywacz, N.M. (1988) A computational theory for the perception of coherent
visual motion. Nature, 333,71-74,
[9] Weiss, Y., and Adelson, E.H. (1998) Slow and smooth: A Bayesian theory for the combination of
local motion signals in human vision Technical Report 1624. Massachusetts Institute of Technology.
[10] Ullman, S. (1979) The interpretation of Visual Motion. MIT Press, Cambridge, MA, 1979.
1,990 | 2,808 | Rodeo: Sparse Nonparametric Regression in
High Dimensions
John Lafferty
School of Computer Science
Carnegie Mellon University
Larry Wasserman
Department of Statistics
Carnegie Mellon University
Abstract
We present a method for nonparametric regression that performs bandwidth selection and variable selection simultaneously. The approach is
based on the technique of incrementally decreasing the bandwidth in directions where the gradient of the estimator with respect to bandwidth
is large. When the unknown function satisfies a sparsity condition, our
approach avoids the curse of dimensionality, achieving the optimal minimax rate of convergence, up to logarithmic factors, as if the relevant variables were known in advance. The method, called rodeo (regularization of derivative expectation operator), conducts a sequence of hypothesis
tests, and is easy to implement. A modified version that replaces hard
with soft thresholding effectively solves a sequence of lasso problems.
1
Introduction
Estimating a high dimensional regression function is notoriously difficult due to the
"curse of dimensionality." Minimax theory precisely characterizes the curse. Let Y_i = m(X_i) + ε_i, i = 1, ..., n, where X_i = (X_i(1), ..., X_i(d)) ∈ R^d is a d-dimensional covariate, m : R^d → R is the unknown function to estimate, and ε_i ∼ N(0, σ²). Then if m is in W₂(c), the d-dimensional Sobolev ball of order two and radius c, it is well known that

lim inf_{n→∞} n^{4/(4+d)} inf_{m̂_n} sup_{m ∈ W₂(c)} R(m̂_n, m) > 0,   (1)

where R(m̂_n, m) = E_m ∫ (m̂_n(x) − m(x))² dx is the risk of the estimate m̂_n constructed on a sample of size n (Györfi et al. 2002). Thus, the best rate of convergence is n^{−4/(4+d)}, which is impractically slow if d is large.
However, for some applications it is reasonable to expect that the true function only depends
on a small number of the total covariates. Suppose that m satisfies such a sparseness
condition, so that m(x) = m(x_R) where x_R = (x_j : j ∈ R), and R ⊂ {1, ..., d} is a subset of the d covariates, of size r = |R| ≪ d. We call {x_j}_{j∈R} the relevant variables. Under this sparseness assumption we can hope to achieve the better minimax convergence rate of n^{−4/(4+r)} if the r relevant variables can be isolated. Thus, we are faced with the problem
of variable selection in nonparametric regression.
A large body of previous work has addressed this fundamental problem, which has led
to a variety of methods to combat the curse of dimensionality. Many of these are based
on very clever, though often heuristic techniques. For additive models of the form f(x) = Σ_j f_j(x_j), standard methods like stepwise selection, C_p and AIC can be used
(Hastie et al. 2001). For spline models, Zhang et al. (2005) use likelihood basis pursuit, essentially the lasso adapted to the spline setting. CART (Breiman et al. 1984) and
MARS (Friedman 1991) effectively perform variable selection as part of their function fitting. More recently, Li et al. (2005) use independence testing for variable selection and
Bühlmann and Yu (2005) introduced a boosting approach. While these methods have met
with varying degrees of empirical success, they can be challenging to implement and demanding computationally. Moreover, these methods are typically difficult to analyze theoretically, and so often come with no formal guarantees. Indeed, the theoretical analysis
of sparse parametric estimators such as the lasso (Tibshirani 1996) is difficult, and only
recently has significant progress been made on this front (Donoho 2004; Fu and Knight
2000).
In this paper we present a new approach to sparse nonparametric function estimation that
is both computationally simple and amenable to theoretical analysis. We call the general
framework rodeo, for regularization of derivative expectation operator. It is based on the
idea that bandwidth and variable selection can be simultaneously performed by computing
the infinitesimal change in a nonparametric estimator as a function of the smoothing parameters, and then thresholding these derivatives to effectively get a sparse estimate. As
a simple version of this principle we use hard thresholding, effectively carrying out a sequence of hypothesis tests. A modified version that replaces testing with soft thresholding
effectively solves a sequence of lasso problems. The potential appeal of this approach is
that it can be based on relatively simple and theoretically well understood nonparametric
techniques such as local linear smoothing, leading to methods that are simple to implement
and can be used in high dimensional problems. Moreover, we show that the rodeo can
achieve near optimal minimax rates of convergence, and therefore circumvents the curse of
dimensionality when the true function is indeed sparse. When applied in one dimension,
our method yields a locally optimal bandwidth. We present experiments on both synthetic
and real data that demonstrate the effectiveness of the new approach.
2
Rodeo: The Main Idea
The key idea in our approach is as follows. Fix a point x and let m̂_h(x) denote an estimator of m(x) based on a vector of smoothing parameters h = (h₁, ..., h_d). If c is a scalar, then we write h = c to mean h = (c, ..., c). Let M(h) = E(m̂_h(x)) denote the mean of m̂_h(x). For now, assume that x = X_i is one of the observed data points and that m̂₀(x) = Y_i. In that case, m(x) = M(0) = E(Y_i). If P = (h(t) : 0 ≤ t ≤ 1) is a smooth path through the set of smoothing parameters with h(0) = 0 and h(1) = 1 (or any other fixed, large bandwidth) then
m(x) = M(0) = M(1) − ∫₀¹ (dM(h(s))/ds) ds = M(1) − ∫₀¹ ⟨D(s), ḣ(s)⟩ ds,

where D(h) = ∇M(h) = (∂M/∂h₁, ..., ∂M/∂h_d)ᵀ is the gradient of M(h) and ḣ(s) = dh(s)/ds is the derivative of h(s) along the path. A biased, low variance estimator of M(1) is m̂₁(x).
An unbiased estimator of D(h) is

Z(h) = (∂m̂_h(x)/∂h₁, ..., ∂m̂_h(x)/∂h_d)ᵀ.   (2)

The naive estimator

m̂(x) = m̂₁(x) − ∫₀¹ ⟨Z(s), ḣ(s)⟩ ds   (3)
Figure 1: The bandwidths for the relevant variables (h₂) are shrunk, while the bandwidths for the irrelevant variables (h₁) are kept relatively large. The simplest rodeo algorithm shrinks the bandwidths in discrete steps 1, β, β², ... for some 0 < β < 1. [The figure sketches the start point, the rodeo path, the ideal path, and the optimal bandwidth in the (h₁, h₂) plane.]
is identically equal to m̂₀(x) = Y_i, which has poor risk since the variance of Z(h) is large for small h. However, our sparsity assumption on m suggests that there should be paths for which D(h) is also sparse. Along such a path, we replace Z(h) with an estimator D̂(h) that makes use of the sparsity assumption. Our estimate of m(x) is then

m̃(x) = m̂₁(x) − ∫₀¹ ⟨D̂(s), ḣ(s)⟩ ds.   (4)
To implement this idea we need to do two things: (i) we need to find a sparse path and (ii)
we need to take advantage of this sparseness when estimating D along that path.
The key observation is that if x_j is irrelevant, then we expect that changing the bandwidth h_j for that variable should cause only a small change in the estimator m̂_h(x). Conversely, if x_j is relevant, then we expect that changing the bandwidth h_j for that variable should cause a large change in the estimator. Thus, Z_j = ∂m̂_h(x)/∂h_j should discriminate between relevant and irrelevant covariates. To simplify the procedure, we can replace the continuum of bandwidths with a discrete set where each h_j ∈ B = {h₀, βh₀, β²h₀, ...} for some 0 < β < 1. Moreover, we can proceed in a greedy fashion by estimating D(h) sequentially with h_j ∈ B and setting D̂_j(h) = 0 when h_j < ĥ_j, where ĥ_j is the first h such that |Z_j(h)| < λ_j(h) for some threshold λ_j. This greedy version, coupled with the hard threshold estimator, yields m̃(x) = m̂_ĥ(x). A conceptual illustration of the idea is shown in Figure 1. This idea can be implemented using a greedy algorithm, coupled with the hard threshold estimator, to yield a bandwidth selection procedure based on testing.
This approach to bandwidth selection is similar to that of Lepski et al. (1997), which uses a
more refined test that leads to estimators that achieve good spatial adaptation over large function
classes. Our approach is also similar to a method of Ruppert (1997) that uses a sequence of
decreasing bandwidths and then estimates the optimal bandwidth by estimating the mean
squared error as a function of bandwidth. Our greedy approach tests whether an infinitesimal change in the bandwidth from its current setting leads to a significant change in the
estimate, and is more easily extended to a practical method in higher dimensions. Related
work of Hristache et al. (2001) focuses on variable selection in multi-index models rather
than on bandwidth estimation.
3
Rodeo using Local Linear Regression
We now present the multivariate case in detail, using local linear smoothing as the basic
method since it is known to have many good properties. Let x = (x(1), . . . , x(d)) be some
point at which we want to estimate m. Let m̂_H(x) denote the local linear estimator of m(x) using bandwidth matrix H. Thus,

m̂_H(x) = e₁ᵀ (X_xᵀ W_x X_x)⁻¹ X_xᵀ W_x Y,   (5)

where X_x is the n × (d + 1) matrix whose i-th row is (1, (X_i − x)ᵀ).
Here e₁ = (1, 0, ..., 0)ᵀ, and W_x is the diagonal matrix with (i, i) element K_H(X_i − x), where K_H(u) = |H|⁻¹ K(H⁻¹u). The estimator m̂_H can be written as m̂_H(x) = Σ_{i=1}^n G(X_i, x, h) Y_i, where

G(u, x, h) = e₁ᵀ (X_xᵀ W_x X_x)⁻¹ (1, (u − x)ᵀ)ᵀ K_H(u − x)   (6)

is called the effective kernel. We assume that the covariates are random with sampling density f(x), and make the same assumptions as Ruppert and Wand (1994) in their analysis of the bias and variance of local linear regression. In particular, (i) the kernel K has compact support with zero odd moments and ∫ uuᵀ K(u) du = ν₂(K) I, and (ii) the sampling density f(x) is continuously differentiable and strictly positive. In the version of the algorithm that follows, we take K to be a product kernel and H to be diagonal with elements h = (h₁, ..., h_d).
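As an aside (an editor's sketch under the stated assumptions, not code from the paper), the estimator of equations (5)-(6) is a few lines with a Gaussian product kernel; the |H|⁻¹ factor is dropped since it cancels:

```python
import numpy as np

def local_linear(x0, X, Y, h):
    """Local linear estimate m_hat_H(x0), diagonal bandwidth h,
    Gaussian product kernel (equation (5))."""
    Z = np.hstack([np.ones((len(X), 1)), X - x0])            # the matrix X_x
    w = np.exp(-0.5 * np.sum(((X - x0) / h) ** 2, axis=1))   # W_x diagonal
    A = Z.T @ (w[:, None] * Z)
    beta = np.linalg.solve(A, Z.T @ (w * Y))
    return beta[0]                                           # e1' beta
```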
Our method is based on the statistic

Z_j = ∂m̂_h(x)/∂h_j = Σ_{i=1}^n G_j(X_i, x, h) Y_i,   (7)

where G_j(u, x, h) = ∂G(u, x, h)/∂h_j. Straightforward calculations show that

Z_j = ∂m̂_h(x)/∂h_j = e₁ᵀ (X_xᵀ W_x X_x)⁻¹ X_xᵀ (∂W_x/∂h_j) (Y − X_x β̂),   (8)

where β̂ = (X_xᵀ W_x X_x)⁻¹ X_xᵀ W_x Y is the coefficient vector for the local linear fit. Note that the factor |H|⁻¹ = Π_{i=1}^d 1/h_i in the kernel cancels in the expression for m̂, and therefore we can ignore it in our calculation of Z_j. Assuming a product kernel we have
W_x = diag( Π_{j=1}^d K((X_{1j} − x_j)/h_j), ..., Π_{j=1}^d K((X_{nj} − x_j)/h_j) )   (9)

and ∂W_x/∂h_j = W_x D_j, where

D_j = diag( ∂ log K((X_{1j} − x_j)/h_j)/∂h_j, ..., ∂ log K((X_{nj} − x_j)/h_j)/∂h_j ),   (10)

and thus Z_j = e₁ᵀ (X_xᵀ W_x X_x)⁻¹ X_xᵀ W_x D_j (Y − X_x β̂). For example, with the Gaussian kernel K(u) = exp(−u²/2) we have D_j = (1/h_j³) diag( (X_{1j} − x_j)², ..., (X_{nj} − x_j)² ).
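A sketch of equation (8) for the Gaussian kernel, together with a finite-difference sanity check (illustrative code, not the authors'):

```python
import numpy as np

def mhat_and_Z(x0, X, Y, h):
    """Return m_hat(x0) and the derivative statistics Z_j of equation (8),
    with D_j = h_j^{-3} diag((X_ij - x0_j)^2) for the Gaussian kernel."""
    n, d = X.shape
    U = X - x0
    Z1 = np.hstack([np.ones((n, 1)), U])
    w = np.exp(-0.5 * np.sum((U / h) ** 2, axis=1))
    A = Z1.T @ (w[:, None] * Z1)
    bhat = np.linalg.solve(A, Z1.T @ (w * Y))
    r = Y - Z1 @ bhat                       # local linear residuals
    Z = np.array([np.linalg.solve(A, Z1.T @ (w * (U[:, j] ** 2 / h[j] ** 3) * r))[0]
                  for j in range(d)])
    return bhat[0], Z

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2)); Y = np.sin(5 * X[:, 0]) + 0.1 * rng.normal(size=300)
x0, h, j, eps = np.full(2, 0.5), np.full(2, 0.3), 0, 1e-5
m0, Z = mhat_and_Z(x0, X, Y, h)
h2 = h.copy(); h2[j] += eps
print(Z[j], (mhat_and_Z(x0, X, Y, h2)[0] - m0) / eps)  # should agree closely
```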
Let

μ_j ≡ μ_j(h) = E(Z_j | X₁, ..., X_n) = Σ_{i=1}^n G_j(X_i, x, h) m(X_i),   (11)

s_j² ≡ s_j²(h) = V(Z_j | X₁, ..., X_n) = σ² Σ_{i=1}^n G_j(X_i, x, h)².   (12)
Then the hard thresholding version of the rodeo algorithm is given in Figure 2.
The algorithm requires that we insert an estimate σ̂ of σ in (12). One estimate of σ can be obtained by generalizing a method of Rice (1984). For i < ℓ, let d_{iℓ} = ||X_i − X_ℓ||. Fix an integer J and let E denote the set of pairs (i, ℓ) corresponding to the J smallest values of d_{iℓ}. Now define σ̂² = (1/(2J)) Σ_{(i,ℓ)∈E} (Y_i − Y_ℓ)².
Rodeo: Hard thresholding version
1. Select parameter 0 < β < 1 and initial bandwidth h₀ slowly decreasing to zero, with h₀ = O(1/ log log n). Let c_n = O(1) be a sequence satisfying dc_n = Ω(log n).
2. Initialize the bandwidths, and activate all covariates:
   (a) h_j = h₀, j = 1, 2, ..., d.
   (b) A = {1, 2, ..., d}.
3. While A is nonempty, do for each j ∈ A:
   (a) Compute the estimated derivative expectation Z_j (equation 7) and s_j (equation 12).
   (b) Compute the threshold λ_j = s_j √(2 log(d c_n)).
   (c) If |Z_j| ≥ λ_j, then set h_j ← βh_j; otherwise remove j from A.
4. Output bandwidths h* = (h₁, ..., h_d) and estimator m̃(x) = m̂_{h*}(x).
Figure 2: The hard thresholding version of the rodeo, which can be applied using the
derivatives Zj of any nonparametric smoother.
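A compact (and simplified) rendering of Figure 2 in Python may help; this is a sketch under stated assumptions, with sigma and c_n supplied by the caller and the weights recomputed once per sweep rather than per coordinate:

```python
import numpy as np

def hard_rodeo(x0, X, Y, sigma, beta=0.9, c_n=1.0, max_sweeps=200):
    """Hard-thresholding rodeo sketch, Gaussian product kernel."""
    n, d = X.shape
    h = np.full(d, 1.0 / np.log(np.log(n)))   # h0, slowly decreasing in n
    active = set(range(d))
    for _ in range(max_sweeps):
        if not active:
            break
        U = X - x0
        Z1 = np.hstack([np.ones((n, 1)), U])
        w = np.exp(-0.5 * np.sum((U / h) ** 2, axis=1))
        A = Z1.T @ (w[:, None] * Z1)
        a = np.linalg.solve(A, np.eye(d + 1)[0])            # A^{-1} e1
        S = np.eye(n) - Z1 @ np.linalg.solve(A, Z1.T * w)   # Y -> residual
        for j in sorted(active):
            Dj = (U[:, j] ** 2) / h[j] ** 3
            Gj = S.T @ ((Z1 @ a) * w * Dj)     # weights with Z_j = Gj . Y
            Zj = Gj @ Y
            lam = sigma * np.sqrt((Gj ** 2).sum()) * np.sqrt(2 * np.log(d * c_n))
            if abs(Zj) >= lam:
                h[j] *= beta                   # keep shrinking: j looks relevant
            else:
                active.discard(j)              # freeze bandwidth: j looks irrelevant
    Z1 = np.hstack([np.ones((n, 1)), X - x0])
    w = np.exp(-0.5 * np.sum(((X - x0) / h) ** 2, axis=1))
    bhat = np.linalg.solve(Z1.T @ (w[:, None] * Z1), Z1.T @ (w * Y))
    return h, bhat[0]
```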
Then E(σ̂²) = σ² + bias, where bias ≤ D sup_x Σ_{j∈R} ∂f(x)/∂x_j, with D = max_{(i,ℓ)∈E} ||X_i − X_ℓ||. There is a bias-variance tradeoff: large J makes σ̂² positively biased, and small J makes σ̂² highly variable. Note however that the bias is mitigated by sparsity (small r). This is the estimator used in our examples.
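A direct transcription of this variance estimate (a sketch; the O(n²) pair search is fine at these sample sizes):

```python
import numpy as np
from itertools import combinations

def sigma_hat_sq(X, Y, J):
    """Generalized Rice (1984) estimate: average squared response gap
    over the J closest covariate pairs, divided by two."""
    pairs = sorted(combinations(range(len(X)), 2),
                   key=lambda p: np.linalg.norm(X[p[0]] - X[p[1]]))[:J]
    return sum((Y[i] - Y[l]) ** 2 for i, l in pairs) / (2 * J)
```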
4
Analysis
In this section we present some results on the properties of the resulting estimator. Formally, we use a triangular array approach so that f (x), m(x), d and r can all change as n
changes. For convenience of notation we assume that the covariates are numbered such that
the relevant variables x_j correspond to 1 ≤ j ≤ r, and the irrelevant variables to j > r. To
begin, we state the following technical lemmas on the mean and variance of Zj .
Lemma 4.1. Suppose that K is a product kernel with bandwidth vector h = (h₁, ..., h_d). If the sampling density f is uniform, then μ_j = 0 for all j ∈ Rᶜ. More generally, assuming that r is bounded, we have the following when h_j → 0: If j ∈ Rᶜ the derivative of the bias is

μ_j = (∂/∂h_j) E[m̂_H(x) − m(x)] = tr(H_R ℋ_R) ν₂² (∂_j log f(x))² h_j + o_P(h_j),   (13)

where the Hessian of m(x) is ℋ = [ℋ_R 0; 0 0] and H_R = diag(h₁², ..., h_r²). For j ∈ R we have

μ_j = (∂/∂h_j) E[m̂_H(x) − m(x)] = h_j ν₂ m_jj(x) + o_P(h_j).   (14)
Lemma 4.2. Let C = σ² R(K) / (4 f(x)), where R(K) = ∫ K(u)² du. Then, if h_j = o(1),

s_j² = Var(Z_j | X₁, ..., X_n) = (C / (n h_j²)) (Π_{k=1}^d 1/h_k) (1 + o_P(1)).   (15)
These lemmas parallel the calculations of Ruppert and Wand (1994) except for the difference that the irrelevant variables have different leading terms in the expansions than
relevant variables.
Our main theoretical result characterizes the asymptotic running time, selected bandwidths,
and risk of the algorithm. In order to get a practical algorithm, we need to make assumptions on the functions m and f .
(A1) For some constant k > 0, each j > r satisfies

∂_j log f(x) = O( log^k n / n^{1/4} ).   (16)

(A2) For each j ≤ r,

m_jj(x) ≠ 0.   (17)
Explanation of the Assumptions. To give the intuition behind these assumptions, recall from Lemma 4.1 that

μ_j = A_j h_j + o_P(h_j) for j ≤ r,   μ_j = B_j h_j + o_P(h_j) for j > r,   (18)

where

A_j = ν₂ m_jj(x),   B_j = tr(H_R ℋ_R) ν₂² (∂_j log f(x))².   (19)

Moreover, μ_j = 0 when the sampling density f is uniform or the data are on a regular grid. Consider assumption (A1). If f is uniform then this assumption is automatically satisfied since then μ_j(s) = 0 for j > r. More generally, μ_j is approximately proportional to (∂_j log f(x))² for j > r, which implies that |μ_j| ≈ 0 for irrelevant variables if f is sufficiently smooth in the variable x_j. Hence, assumption (A1) can be interpreted as requiring that f is sufficiently smooth in the irrelevant dimensions.

Now consider assumption (A2). Equation (18) ensures that μ_j is proportional to h_j |m_jj(x)| for small h_j. Since we take the initial bandwidth h₀ to decrease slowly with n, (A2) implies that |μ_j(h)| ≥ c h_j |m_jj(x)| for some constant c > 0, for sufficiently large n.
In the following we write Y_n = Õ_P(a_n) to mean Y_n = O_P(b_n a_n) where b_n is logarithmic in n; similarly, a_n = Ω̃(b_n) if a_n = Ω(b_n c_n) where c_n is logarithmic in n.
Theorem 4.3. Suppose assumptions (A1) and (A2) hold. In addition, suppose that d_min = min_{j≤r} |m_jj(x)| = Ω̃(1) and d_max = max_{j≤r} |m_jj(x)| = Õ(1). Then the number of iterations T_n until the rodeo stops satisfies

P( (1/(4+r)) log_{1/β}(n a_n) ≤ T_n ≤ (1/(4+r)) log_{1/β}(n b_n) ) → 1,   (20)

where a_n = Ω̃(1) and b_n = Õ(1). Moreover, the algorithm outputs bandwidths h* that satisfy

P( h_j* ≥ 1/log^k n  for all j > r ) → 1   (21)

and

P( h₀ (n b_n)^{−1/(4+r)} ≤ h_j* ≤ h₀ (n a_n)^{−1/(4+r)}  for all j ≤ r ) → 1.   (22)
Corollary 4.4. Under the conditions of Theorem 4.3, the risk R(h*) of the rodeo estimator satisfies

R(h*) = Õ_P( n^{−4/(4+r)} ).   (23)
In the one-dimensional case, this result shows that the algorithm recovers the locally optimal bandwidth, giving an adaptive estimator, and in general attains the optimal (up to
logarithmic factors) minimax rate of convergence.
The proofs of these results are given in the full version of the paper.
5
Some Examples and Extensions
Figure 3 illustrates the rodeo on synthetic and real data. The left plot shows the bandwidths
obtained on a synthetic dataset with n = 500 points of dimension d = 20. The covariates
are generated as x_i ∼ Uniform(0, 1), the true function is m(x) = 2(x₁ + 1)³ + 2 sin(10x₂),
and ? = 1. The results are averaged over 50 randomly generated data sets; note that the displayed bandwidth paths are not monotonic because of this averaging. The plot shows how
the bandwidths of the relevant variables shrink toward zero, while the bandwidths of the irrelevant variables remain large. Simulations on other synthetic data sets, not included here,
are similar and indicate that the algorithm?s performance is consistent with our theoretical
analysis.
The framework introduced here has many possible generalizations. While we have focused on estimation of m locally at a point x, the idea can be extended to carry out global
bandwidth and variable selection by averaging over multiple evaluation points x1 , . . . , xk .
These could be points of interest for estimation, could be randomly chosen, or could be taken
to be identical to the observed Xi s. In addition, it is possible to consider more general
paths, for example using soft thresholding or changing only the bandwidth corresponding
to the largest |Z_j|/s_j.
Such a version of the rodeo can be seen as a nonparametric counterpart to least angle
regression (LARS) (Efron et al. 2004), a refinement of forward stagewise regression in
which one adds the covariate most correlated with the residuals of the current fit, in small,
incremental steps. Note first that Zj is essentially the correlation between the Yi s and the
Gj (Xi , x, h)s (the change in the effective kernel). Reducing the bandwidth is like adding in
more of that variable. Suppose now that we make the following modifications to the rodeo:
(i) change the bandwidths one at a time, based on the largest Z_j* = |Z_j|/s_j, (ii) reduce the bandwidth continuously, rather than in discrete steps, until the largest Z_j* is equal to the
next largest. Figure 3 (right) shows the result of running this greedy version of the rodeo on
the diabetes dataset used to illustrate LARS. The algorithm averages Z_j* over a randomly
chosen set of k = 100 data points. The resulting variable ordering is seen to be very similar
to, but different from, the ordering obtained from the parametric LARS fit.
Acknowledgments
We thank the reviewers for their helpful comments. Research supported in part by NSF
grants IIS-0312814, IIS-0427206, and DMS-0104016, and NIH grants R01-CA54852-07
and MH57881.
References
L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and regression trees.
Wadsworth Publishing Co Inc, 1984.
P. Bühlmann and B. Yu. Boosting, model selection, lasso and nonnegative garrote. Technical report,
Berkeley, 2005.
Figure 3: Left: Average bandwidth output by the rodeo for a function with r = 2 relevant variables in d = 20 dimensions (n = 500, with 50 trials). Covariates are generated as x_i ∼ Uniform(0, 1), the true function is m(x) = 2(x₁ + 1)³ + 2 sin(10x₂), and σ = 1, fit at the test point x = (1/2, ..., 1/2). The variance is greater for large step sizes since the rodeo runs that long for fewer data sets. Right: Greedy rodeo on the diabetes data, used to illustrate LARS (Efron et al. 2004). A set of k = 100 of the total n = 442 points were sampled (d = 10), and the bandwidth for the variable with largest average |Z_j|/s_j was reduced in each step. The variables were selected in the order 3 (body mass index), 9 (serum), 7 (serum), 4 (blood pressure), 1 (age), 2 (sex), 8 (serum), 5 (serum), 10 (serum), 6 (serum). The parametric LARS algorithm adds variables in the order 3, 9, 4, 7, 2, 10, 5, 8, 6, 1. One notable difference is in the position of the age variable.
D. Donoho. For most large underdetermined systems of equations, the minimal ℓ₁-norm near-solution approximates the sparsest near-solution. Technical report, Stanford, 2004.
B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Annals of Statistics,
32:407?499, 2004.
J. H. Friedman. Multivariate adaptive regression splines. The Annals of Statistics, 19:1?67, 1991.
W. Fu and K. Knight. Asymptotics for lasso type estimators. The Annals of Statistics, 28:1356?1378,
2000.
L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. A Distribution-Free Theory of Nonparametric
Regression. Springer-Verlag, 2002.
T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning: Data Mining,
Inference, and Prediction. Springer-Verlag, 2001.
M. Hristache, A. Juditsky, J. Polzehl, and V. Spokoiny. Structure adaptive approach for dimension
reduction. Ann. Statist., 29:1537?1566, 2001.
O. V. Lepski, E. Mammen, and V. G. Spokoiny. Optimal spatial adaptation to inhomogeneous
smoothness: An approach based on kernel estimates with variable bandwidth selectors. The Annals
of Statistics, 25:929?947, 1997.
L. Li, R. D. Cook, and C. Nachsteim. Model-free variable selection. J. R. Statist. Soc. B., 67:285?299,
2005.
J. Rice. Bandwidth choice for nonparametric regression. The Annals of Statistics, 12:1215?1230,
1984.
D. Ruppert. Empirical-bias bandwidths for local polynomial nonparametric regression and density
estimation. Journal of the American Statistical Association, 92:1049?1062, 1997.
D. Ruppert and M. P. Wand. Multivariate locally weighted least squares regression. The Annals of
Statistics, 22:1346?1370, 1994.
R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical
Society, Series B, Methodological, 58:267?288, 1996.
H. Zhang, G. Wahba, Y. Lin, M. Voelker, R. K. Ferris, and B. Klein. Variable selection and model
building via likelihood basis pursuit. J. of the Amer. Stat. Assoc., 99(467):659?672, 2005.
1,991 | 2,809 | Augmented Rescorla-Wagner and Maximum
Likelihood estimation.
Alan Yuille
Department of Statistics
University of California at Los Angeles
Los Angeles, CA 90095
[email protected]
Abstract
We show that linear generalizations of Rescorla-Wagner can perform
Maximum Likelihood estimation of the parameters of all generative models for causal reasoning. Our approach involves augmenting variables
to deal with conjunctions of causes, similar to the augmented model of
Rescorla. Our results involve genericity assumptions on the distributions
of causes. If these assumptions are violated, for example for the Cheng
causal power theory, then we show that a linear Rescorla-Wagner can
estimate the parameters of the model up to a nonlinear transformation.
Moreover, a nonlinear Rescorla-Wagner is able to estimate the parameters directly to within arbitrary accuracy. Previous results can be used to
determine convergence and to estimate convergence rates.
1
Introduction
It is important to understand the relationship between the Rescorla-Wagner (RW) algorithm [1,2] and theories of learning based on maximum likelihood (ML) estimation of the
parameters of generative models [3,4,5]. The Rescorla-Wagner algorithm has been shown
to account for many experimental findings. But maximum likelihood offers the promise of
a sound statistical basis including the ability to learn sophisticated probabilistic models for
causal learning [6,7,8].
Previous work, summarized in section (2), showed a direct relationship between the basic
Rescorla-Wagner algorithm and maximum likelihood for the ?P model of causal learning
[4,9]. More recently, a generalization of Rescorla-Wagner was shown to perform maximum
likelihood estimation for both the ?P and the noisy-or models [10]. Throughout the paper,
we follow the common practice of studying the convergence of the expected value of the
weights and ignoring the fluctuations. The size of these fluctuations can be calculated
analytically and precise convergence quantified [10].
In this paper, we greatly extend the connections between Rescorla-Wagner and ML estimation. We show that two classes of generalized Rescorla-Wagner algorithms can perform
ML estimation for all generative models provided genericity assumptions on the causes
are satisfied. These generalizations include augmenting the set of variables to represent
conjunctive causes and are related to the augmented Rescorla-Wagner algorithm [2].
We also analyze the case where the genericity assumption breaks down and pay particular
attention to Chengs? causal power model [4,5]. We demonstrate that Rescorla-Wagner
can perform ML estimation for this model up to a nonlinear transformation of the model
parameters (i.e. Rescorla-Wagner does ML but in a different coordinate system). We sketch
how a nonlinear Rescorla-Wagner can estimate the parameters directly.
Convergence analysis from previous work [10] can be directly applied to these new
Rescorla-Wagner algorithms. This gives convergence conditions and put bounds on the
convergence rate. The analysis assumes that the data consists of i.i.d. samples from the
(unknown) causal distribution. But the results can also be applied in the piecewise iid case
(such as forward and backward blocking [11]).
2
Summary of Previous Work
We summarize pervious work relating maximum likelihood estimation of generative models with the Rescorla-Wagner algorithm [4,9,10]. This work assumes that there is a binaryvalued event E which can be caused by one or more of two binary-valued causes C1 , C2 .
The ?P and Noisy-or theories use generative models of form:
P?P (E = 1|C1 , C2 , ?1 , ?2 ) = ?1 C1 + ?2 C2
PN oisy?or (E = 1|C1 , C2 , ?1 , ?2 ) = ?1 C1 + ?2 C2 ? ?1 ?2 C1 C2 ,
(1)
(2)
where {?1 , ?2 } are the model parameters.
The training data consists of examples {E ? , C1? , C2? }. The parameters {?1 , ?2 } are estimated by Maximum Likelihood
Y
{?1? , ?2? } = arg max
P (E ? |C1? , C2? ; ?1 , ?2 )P (C1? , C2? ),
(3)
{?1 ,?2 }
?
where P (C1 , C2 ) is the distribution on the causes. It is independent of {?1 , ?2 } and does
not affect the Maximum Likelihood estimation, except for some non-generic cases to be
discussed in section (5).
An alternative approach to learning causal models is the Rescorla-Wagner algorithm which
updates weights V1 , V2 as follows:
V1t+1 = V1t + ?V1t , V2t+1 = V2t + ?V2t ,
(4)
where the update rule ?V can take forms like:
?V1 = ?1 C1 (E ? C1 V1 ? C2 V2 ), ?V2 = ?2 C2 (E ? C1 V1 ? C2 V2 ), basic rule
?V1 = ?1 C1 (1 ? C2 )(E ? V1 ), ?V2 = ?2 C2 (1 ? C1 )(E ? V2 ), variant rule.
(5)
(6)
It is known that if the basic update rule (5) is used then the weights converge to the ML
estimates of the parameters {?1 , ?2 } provided the data is generated by the ?P model (1)
[4,9] (but not for the noisy-or model).
If the variant update rule (6) is used, then the weights converge to the parameters {?1 , ?2 }
of the ?P model or the noisy-or model (2) depending on which model generates the data
[10].
3
Basic Ingredients
This section describes three basic ingredients of this work: (i) the generative models, (ii)
maximum likelihood, and (iii) the generalized Rescorla-Wagner algorithms.
Representing the generative models.
~ ?
We represent the distribution P (E|C;
~ ) by the function:
X
~ ?
~
P (E = 1|C;
~) =
?i hi (C),
(7)
i
~ are a set of basis functions and the {?i } are parameters. If the dimenwhere the {hi (C)}
~
sion of C is n, then the number of basis functions is 2n . All distributions of binary variables
can be represented in this form.
For example, if n = 2 we can use the basis:
~ = 1, h2 (C)
~ = C1 , h3 (C)
~ = C2 , h4 (C)
~ = C1 C2 ,
h1 (C)
(8)
Then the noisy-or model P (E = 1|C1 , C2 ) = ?1 C1 + ?2 C2 ? ?1 ?2 C1 C2 corresponds to
setting ?1 = 0, ?2 = ?1 , ?3 = ?2 , ?4 = ??1 ?2 .
Data Generation Assumption and Maximum Likelihood
~ ? : ? ? ?} are i.i.d. samples from P (E|C)P
~ (C).
~
We assume that the observed data {E ? , C
It is possible to adapt our results to cases where the data is piecewise i.i.d., such as blocking
experiments, but we have no space to describe this here.
Maximum Likelihood (ML) estimates the ?
~ by solving:
X
X
~ ?; ?
~ ? )} = arg min ?
~ ?; ?
log{P (E ? |C
~ )P (C
log P (E ? |C
~ ). (9)
?
~ ? = arg min ?
?
~
?
~
???
???
~ provided the distribution is generic.
Observe that the estimate of ?
~ is independent of P (C)
Important non-generic cases are treated in section (5).
Generalized Rescorla-Wagner.
The Rescorla-Wagner (RW) algorithm updates weights {Vi : i = 1, ..., n} by a discrete
iterative algorithm:
Vit+1 = Vit + ?Vit , i = 1, ..., n.
(10)
We assume a generalized form:
X
~ + Egi (C),
~ i, j = 1, ..., n
?Vi =
Vj fij (C)
(11)
j
~ {gi (C)}.
~
for functions {fij (C)},
It is easy to see that equations (5,6) are special cases.
4
Theoretical Results
We now gives sufficient conditions which ensure that the only fixed points of generalized
Rescorla-Wagner correspond to ML estimates of the parameters ?
~ of generative models
~ ?
P (E|C,
~ ). We then obtain two classes of generalized Rescorla-Wagner which satisfy
these conditions. For one class, convergence to the fixed points follow directly. For the
other class we need to adapt results from [10] to guarantee convergence to the fixed points.
~ of causes. We relax
Our results assume genericity conditions on the distribution P (C)
these conditions in section (5).
The number of weights {Vi } used by the Rescorla-Wagner algorithm is equal to the number
of parameters {?i } that specify the model. But many weights will remain zero unless
conjunctions of causes occur, see section (6).
Theorem 1. A sufficient condition for generalized Rescorla-Wagner (11), to have a unique
fixed point at the maximum likelihood estimates of the parameters of a generative model
~ ?
~ > ~ = ? < gi (C)h
~ j (C)
~ > ~ ? i, j and the matrix
P (E|C;
~ ) (7), is that < fij (C)
P (C)
P (C)
~ > ~ is invertible.
< fij (C)
P (C)
Proof. We calculate the expectation < ?Vi >P (E|C)P
~ (C)
~ . This is zero if, and only if,
P
P
~
~
~
~ +
~ = 0. The result follows.
j Vj < fij (C) >P (C)
j ?j < gi (C)hj (C) >P (C)
We use notation that < . >P (C)
~ is the expectation with respect to the probability distri~ on the causes. For example, < fij (C)
~ > ~ = P ~ P (C)f
~ ij (C).
~ Hence
bution P (C)
P (C)
C
~
~
the requirement that the matrix < fij (C) > ~ is invertible usually requires that P (C)
P (C)
is generic. See examples in sections (4.1,4.2). Convergence may still occur if the ma~ > ~ is non-invertible. Linear combinations of the weights will remained
trix < fij (C)
P (C)
fixed (in the directions of the zero eigenvectors of the matrix) and the remaining linear
co,mbinations will converge.
Additional conditions to ensure convergence to the fixed point, and to determine the convergence rate, can be found using Theorems 3,4,5 in [10].
4.1
Generalized RW class I
We now give prove a corollary of Theorem 1 which will enable us to obtain our first class
of generalized RW algorithms.
Corollary 1. A sufficient condition for generalized RW to have fixed points at ML esti~ = ?hi (C)h
~ j (C),
~ gi (C)
~ = hi (C)
~ ? i, j and the
mates of the model parameters is fij (C)
~ j (C)
~ > ~ is invertible. Moreover, convergence to the fixed point is
matrix < hi (C)h
P (C)
guaranteed.
Proof. Direct verification. Convergence to the fixed point follows from the gradient descent
nature of the algorithm, see equation (12).
These conditions define generalized RW class I (GRW-I) which is a natural extension of
basic Rescorla-Wagner (5):
X
X
~ j )2 , i = 1, ..., n (12)
~ j } = ? ? (E ?
~
hj (C)V
hj (C)V
?Vi = hi (C){E
?
?V
i
j
j
This GRW-I algorithm ia guaranteed to converge to the fixed point because it performs
stochastic steepest descent. This is essentially the Widrow-Huff algorithm [12,13].
To illustrate Corollary 1, we show the relationships between GRW-I and ML for three
different generative models: (i) the ?P model, (ii) the noisy-or model, and (iii) the most
~ for two causes. It is important to realize that these generative
general form of P (E|C)
models form a hierarchy and GRW-I algorithms for the later models will also perform ML
on the simpler ones.
1. The ?P model.
~ = C1 and h2 (C)
~ = C2 . Then equation (12) reduces to the basic
Set n = 2, h1 (C)
RW algorithm (5) with two weights V1 , V2 . By Corollary 1, we see that it performs ML
estimation for the ?P model (1). This rederives the known relationship between basic RW,
ML, and the ?P model [4,9].
< C1 >P (C)
< C1 C2 >P (C)
~
~
Observe that Corollary 1 requires that the matrix
< C1 C2 >P (C)
< C2 >P (C)
~
~
be invertible. This is equivalent to the genericity condition < C1 C2 >2P (C)
~ 6=<
C1 >P (C)
~ < C2 >P (C)
~ .
2. The Noisy-Or model.
~ = C1 , h2 (C)
~ = C2 , h3 (C)
~ = C1 C2 . Then Corollary 1 proves that
Set n = 3 with h1 (C)
the following algorithm will converge to estimate V1? = ?1 , V2? = ?2 and V3? = ??1 ?2
for the noisy-or model.
?V1 = C1 (E ? C1 V1 ? C2 V2 ? C1 C2 V3 ) = C1 (E ? V1 ? C2 V2 ? C2 V3 )
?V2 = C2 (E ? C1 V1 ? C2 V2 ? C1 C2 V3 ) = C2 (E ? C1 V1 ? V2 ? C1 V3 )
?V3 = C1 C2 (E ? C1 V1 ? C2 V2 ? C1 C2 V3 ) = C1 C2 (E ? V1 ? V2 ? V3 ).
(13)
This algorithm is a minor variant of basic RW. Observe that this has more weights (n = 3)
than the total number of causes. The first two weights V1 and V2 yield ?1 , ?2 while the
~ j (C)
~ > ~
third weight V3 gives a (redundant) estimate of ?1 ?2 . The matrix < hi (C)h
P (C)
has determinant (< C1 C2 > ? < C1 >)(< C1 C2 > ? < C2 >) < C1 C2 > and is
invertible provided < C1 >6= 0, 1, < C2 >6= 0, 1 and < C1 C2 >6=< C1 >< C2 >.
This rules out the special case in Cheng?s experiments [4,5] where C1 = 1 always, see
discussion in section (5).
It is known that basic RW is unable to do ML estimation for the noisy-or model if there are
only two weights [4,5,9,10]. The differences here is that three weights are used.
3. The general two-cause model.
~ for two causes. This can be written
Thirdly, we consider the most general model P (E|C)
in the form:
P (E = 1|C1 , C2 ) = ?1 + ?2 C1 + ?3 C2 + ?4 C1 C2 .
(14)
~ = 1, h2 (C)
~ = C1 , h3 (C)
~ = C2 , h4 (C)
~ = C1 C2 . Corollary 1
This corresponds to h1 (C)
gives us the most general algorithm:
?V1 = (E ? V1 ? C1 V2 ? C2 V3 ? C1 C2 V4 ) = (E ? V1 ? C1 V2 ? C2 V3 ? C1 C2 V4 )
?V2 = C1 (E ? V1 ? C1 V2 ? C2 V3 ? C1 C2 V4 ) = C1 (E ? V1 ? V2 ? C2 V3 ? C2 V4 )
?V3 = C2 (E ? V1 ? C1 V2 ? C2 V3 ? C1 C2 V4 ) = C2 (E ? V1 ? C1 V2 ? V3 ? C1 V4 )
?V4 = C1 C2 (E ? V1 ? C1 V2 ? C2 V3 ? C1 C2 V4 ) = C1 C2 (E ? V1 ? V2 ? V3 ? V4 ).
By Corollary 1, this algorithm will converge to V1? = ?1 , V2? = ?2 , V3? = ?3 , V4? = ?4 ,
~ j (C)
~ > ~ is
provided the matrix is invertible. The determinant of the matrix < hi (C)h
P (C)
< C1 C2 > (< C1 C2 > ? < C1 >)(< C1 C2 > ? < C2 >)(1? < C1 > ? < C2 >
+ < C1 C2 >). This will be zero for special cases, for example if C1 = 1 always.
~
It is important to realize that the most general GRW-I algorithm will converge if P (E|C)
is the ?P or the noisy-or model. For ?P it will converge to V1? = 0, V2? = ?1 , V3? =
?2 , V4? = 0. For noisy-or, it converges to V1? = 0, V2? = ?1 , V3? = ?2 , V4? = ??1 ?2 .
The learning system which implements the GRW-I algorithm will not know a priori
whether the data is generated by ?P , noisy-or, or the general model for P (E|C1 , C2 ).
It is therefore better to implement the most general algorithm because this works whatever
model generated the data.
~ will lead to different ways to parameterize the probability
Note: other functions {hi (C)}
~ They will correspond to different RW algorithms. But their basic
distribution P (E|C).
properties will be similar to those discussed in this section.
4.2
Generalized RW Class II
We can obtain a second class of generalized RW algorithms which perform ML estimation.
Corollary 2. A sufficient condition for RW to have unique fixed point at the ML estimate
~ is that fij (C)
~ = ?gi (C)h
~ j (C),
~ provided the matrix
of the generative model P (E|C)
~
~
< hi (C)hj (C) >P (C)
~ is invertible.
Proof. Direct verification.
Corollary 2 defines GRW-II to be of form:
~
?Vi = gi (C){E
?
X
~ j }.
hj (C)V
(15)
j
We illustrate GRW-II by applying it to the noisy-or model (2). It gives an algorithm very
similar to equation (6).
~ = C1 , h2 (C)
~ = C2 , h3 (C)
~ = C1 C2 and g1 (C)
~ = C1 (1 ? C2 ), g2 (C)
~ =
Set h1 (C)
~ = C1 C2 .
C2 (1 ? C1 ), g3 (C)
Corollary 2 yields the update rule:
?V1 = C1 (1 ? C2 ){E ? C1 V1 ? C2 V2 ? C1 C2 V3 } = C1 (1 ? C2 ){E ? V1 },
?V2 = C2 (1 ? C1 ){E ? C1 V1 ? C2 V2 ? C1 C2 V3 } = C2 (1 ? C1 ){E ? V2 },
?V3 = C1 C2 {E ? C1 V1 ? C2 V2 ? C1 C2 V3 } = C1 C2 {E ? V1 ? V2 ? V3 }.
(16)
~ j (C)
~ > ~ has determinant < C1 C2 > (< C1 > ? < C1 C2 >)(<
The matrix < hi (C)h
P (C)
~ The algorithm will converge
C2 > ? < C1 C2 >) and so is invertible for generic P (C).
?
?
?
to weights V1 = ?1 , V2 = ?2 , V3 = ??1 ?2 . If we change the model to ?P , then we get
convergence to V1? = ?1 , V2? = ?2 , V3? = 0.
Observe that the equations (16) are largely decoupled. In particular, the updates for V1 and
V2 do not depend on the third weight V3 . It is possible to remove the update equation for V3
~ = 0. The remaining update equations for V1 &V2 will converge to ?1 , ?2
by setting g3 (C)
for both the noisy-or and the ?P model.
These reduced update equations are identical to those given by equation (6) which were
~ j (C)
~ > ~ now has
proven to converge to ?1 , ?2 [10]. We note that the matrix < hi (C)h
P (C)
~ = 0) but this does not matter because it corresponds to
a zero eigenvalue (because g3 (C)
the third weight V3 . The matrix remains invertible if we restrict it to i, j = 1, 2.
A limitation of GRW-II algorithm of equation (16) is that it only updates the weights if
only one cause is active. So it would fail to explain effects such as blocking where both
causes are on for part of the stimuli (Dayan personal communication).
5
Non-generic, coordinate transformations, and non-linear RW
~ of causes. They
Our results have assumed genericity constraints on the distribution P (C)
usually correspond to cases where one cause is always present. We now briefly discuss
what happens when these constraints are violated. For simplicity, we concentrate on an
important special case.
Cheng?s PC theory [4,5] uses the noisy-or model for generating the data but cause C1 is
a background cause which is on all the time (i.e. C1 = 1 always). This implies that
< C2 >=< C1 C2 > and so we cannot apply RW algorithms (13), the most general
algorithm, or (16) because the matrix determinant will be zero in all three cases. Since
C1 = 1 we can drop it as a variable and re-express the noisy-or model as:
~ = ?1 + ?2 (1 ? ?1 )C2 .
P (E = 1|C)
(17)
Theorem 1 shows that we can define generalized RW algorithms to find ML estimates of
?1 and ?2 (1 ? ?1 ) (assuming ?1 6= 1). But, conversely, it is impossible to estimate ?2
directly by any linear generalized RW.
The problem is simply a matter of different coordinate systems. RW estimates the parameters of the generative model in a different coordinate system than the one used to specify
the model. There is a non-linear transformation between the coordinates systems relating
{?1 , ?2 } to {?1 , ?2 (1 ? ?1 )}. So RW can estimate the ML parameters provided we allow
for an additional non-linear transformation. From this perspective, the inability to RW to
perfrom ML estimation for Cheng?s model is merely an artifact. If we reparameterize the
~ = ?1 + ?
generative model to be P (E = 1|C)
? 2 C2 , where ?
? 2 = ?2 (1 ? ?1 ), then we can
design an RW to estimate {?1 , ?
? 2 }.
The non-linear transformation breaks down if ?1 = 1. In this case, the generative model
~ becomes independent of ?2 and so it is impossible to estimate it.
P (E|C)
But suppose we want to really estimate ?1 and ?2 directly (for Cheng?s model, the value of
?2 is the causal power and hence is a meaningful quantity [4,5]). To do this we first define
a linear RW to estimate ?1 and ?
? 2 = ?2 (1 ? ?1 ). The equations are:
V1t+1 = V1t + ?1 ?V1t , V2t+1 = V2t + ?2 ?V2t .
(18)
with < V1 >7? ?1 and < V2 >7? ?2 for large t. The fluctuations (variances) are scaled
by the parameters ?1 , ?2 and hence can be made arbitrarily small, see [10].
To estimate ?2 , we replace the variable V2 by a new variable V3 = V2 /(1 ? V1 ) which is
updated by a nonlinear equation (V1 is updated as before):
V3t+1 = V3t +
?V2t
V3t
t
?V
+
,
1 ? V1t 1
1 ? V1t
(19)
where we use V3 = V2 /(1?V1 ) to re-express ?V1 and ?V2 in terms of functions of V1 and
V3 . Provided the fluctuations are small, by controlling the size of the ??s, we can ensure
that V3 converges arbitrarily close to ?
? 2 /(1 ? ?1 ) = ?2 .
6
Conclusion
This paper shows that we can obtain linear generalizations of the Rescorla-Wagner algorithm which can learn the parameters of generative models by Maximum Likelihood. For
one class of RW generalizations we have only shown that the fixed points are unique and
correspond to ML estimates of the parameters of the generative models. But Theorems
3,4 & 5 of Yuille (2004) can be applied to determine convergence conditions. Convergence rates can be determined by these Theorems provided that the data is generated as
i.i.d. samples from the generative model. These theorems can also be used to obtain convergence results for piecewise i.i.d. samples as occurs in foreward and backward blocking
experiments.
These generalizations of Rescorla-Wagner require augmenting the number of weight variables. This was already proposed, on experimental grounds, so that new weights get created
if causes occur in conjunction, [2]. Note that this happens naturally in the algorithms presented (13, the most general algorithm,16) ? weights remain at zero until we get an event
C1 C2 = 1. It is straightforward to extend the analysis to models with conjunctions of many
causes. We conjecture that these generalizations converge to good approaximation to ML
estimates if we truncate the conjunction of causes at a fixed order.
Finally, many of our results have involved a genericity assumption on the distribution of
~ We have argued that when these assumptions are violated, for example in
causes P (C).
Cheng?s experiments, then generalized RW still performs ML estimation, but with a nonlinear transform. Alternatively we have shown how to define a nonlinear RW that estimates
the parameters directly.
Acknowledgement
I acknowledge helpful conversations with Peter Dayan, Rich Shiffrin, and Josh Tennenbaum. I thank Aaron Courville for describing augmented Rescorla-Wagner. I thank the
W.M. Keck Foundation for support and NSF grant 0413214.
References
[1]. R.A. Rescorla and A.R. Wagner. ?A Theory of Pavlovian Conditioning?. In A.H.
Black andW.F. Prokasy, eds. Classical Conditioning II: Current Research and Theory.
New York. Appleton-Century-Crofts, pp 64-99. 1972.
[2] R.A. Rescorla. Journal of Comparative and Physiological Psychology. 79, 307. 1972.
[3]. B. A. Spellman. ?Conditioning Causality?. In D.R. Shanks, K.J. Holyoak, and D.L.
Medin, (eds). Causal Learning: The Psychology of Learning and Motivation, Vol. 34.
San Diego, California. Academic Press. pp 167-206. 1996.
[4]. P. Cheng. ?From Covariance to Causation: A Causal Power Theory?. Psychological
Review, 104, pp 367-405. 1997.
[5]. M. Buehner and P. Cheng. ?Causal Induction: The power PC theory versus the
Rescorla-Wagner theory?. In Proceedings of the 19th Annual Conference of the Cognitive Science Society?. 1997.
[6]. J.B. Tenenbaum and T.L. Griffiths. ?Structure Learning in Human Causal Induction?.
Advances in Neural Information Processing Systems 12. MIT Press. 2001.
[7]. D. Danks, T.L. Griffiths, J.B. Tenenbaum. ?Dynamical Causal Learning?. Advances in
Neural Information Processing Systems 14. 2003.
[8] A.C. Courville, N.D. Dew, and D.S. Touretsky. ?Similarity and discrimination in classical conditioning?. NIPS. 2004.
[9]. D. Danks. ?Equilibria of the Rescorla-Wagner Model?. Journal of Mathematical
Psychology. Vol. 47, pp 109-121. 2003.
[10] A.L. Yuille. ?The Rescorla-Wagner algorithm and Maximum Likelihood estimation
of causal parameters?. NIPS. 2004.
[11]. P. Dayan and S. Kakade. ?Explaining away in weight space?. In Advances in Neural
Information Processing Systems 13. 2001.
[12] B. Widrow and M.E. Hoff. ?Adapting Switching Circuits?. 1960 IRE WESCON Conv.
Record., Part 4, pp 96-104. 1960.
[13] A.G. Barto and R.S. Sutton. ?Time-derivative Models of Pavlovian Conditioning?.
In Learning and Computational Neuroscience: Foundations of Adaptive Networks. M.
Gabriel and J. Moore (eds.). pp 497-537. MIT Press. Cambridge, MA. 1990.
| 2809 |@word determinant:4 briefly:1 holyoak:1 covariance:1 current:1 conjunctive:1 written:1 realize:2 remove:1 drop:1 update:11 discrimination:1 generative:18 steepest:1 record:1 ire:1 simpler:1 mathematical:1 c2:101 direct:3 h4:2 consists:2 prove:1 expected:1 v1t:8 becomes:1 provided:9 distri:1 notation:1 moreover:2 circuit:1 conv:1 what:1 finding:1 transformation:5 guarantee:1 esti:1 scaled:1 whatever:1 grant:1 before:1 switching:1 sutton:1 v2t:7 fluctuation:4 black:1 quantified:1 conversely:1 co:1 medin:1 unique:3 practice:1 implement:2 binaryvalued:1 adapting:1 griffith:2 get:3 cannot:1 close:1 put:1 applying:1 impossible:2 equivalent:1 straightforward:1 attention:1 vit:3 simplicity:1 rule:7 century:1 coordinate:5 updated:2 hierarchy:1 suppose:1 controlling:1 diego:1 us:1 blocking:4 observed:1 parameterize:1 calculate:1 personal:1 depend:1 solving:1 yuille:4 basis:4 represented:1 describe:1 valued:1 relax:1 ability:1 statistic:1 gi:6 g1:1 transform:1 noisy:16 eigenvalue:1 rescorla:34 shiffrin:1 los:2 convergence:18 requirement:1 keck:1 generating:1 comparative:1 converges:2 depending:1 widrow:2 illustrate:2 augmenting:3 stat:1 ij:1 minor:1 h3:4 involves:1 implies:1 direction:1 concentrate:1 fij:10 stochastic:1 human:1 enable:1 require:1 argued:1 generalization:7 really:1 extension:1 ground:1 equilibrium:1 estimation:15 mit:2 danks:2 always:4 pn:1 hj:5 sion:1 barto:1 conjunction:5 corollary:11 likelihood:15 greatly:1 helpful:1 dayan:3 arg:3 priori:1 special:4 hoff:1 equal:1 identical:1 stimulus:1 piecewise:3 causation:1 pc:2 decoupled:1 unless:1 re:2 causal:14 theoretical:1 psychological:1 probabilistic:1 v4:12 v3t:3 invertible:10 satisfied:1 cognitive:1 derivative:1 account:1 summarized:1 matter:2 satisfy:1 caused:1 vi:6 later:1 break:2 h1:5 analyze:1 bution:1 accuracy:1 variance:1 largely:1 correspond:4 yield:2 iid:1 explain:1 ed:3 pp:6 involved:1 naturally:1 proof:3 conversation:1 sophisticated:1 follow:2 specify:2 until:1 sketch:1 nonlinear:7 defines:1 artifact:1 effect:1 analytically:1 hence:3 moore:1 deal:1 generalized:16 demonstrate:1 performs:3 reasoning:1 recently:1 common:1 conditioning:5 thirdly:1 extend:2 discussed:2 relating:2 buehner:1 appleton:1 cambridge:1 similarity:1 showed:1 perspective:1 binary:2 arbitrarily:2 additional:2 dew:1 determine:3 converge:12 v3:35 redundant:1 ii:7 sound:1 reduces:1 alan:1 adapt:2 academic:1 offer:1 variant:3 basic:11 essentially:1 expectation:2 represent:2 c1:103 background:1 want:1 huff:1 iii:2 easy:1 affect:1 psychology:3 restrict:1 angeles:2 whether:1 peter:1 york:1 cause:23 gabriel:1 involve:1 eigenvectors:1 tenenbaum:2 rw:26 reduced:1 nsf:1 estimated:1 neuroscience:1 discrete:1 promise:1 vol:2 express:2 backward:2 v1:44 merely:1 throughout:1 bound:1 hi:12 pay:1 guaranteed:2 shank:1 courville:2 cheng:8 annual:1 occur:3 constraint:2 ucla:1 generates:1 min:2 reparameterize:1 pavlovian:2 conjecture:1 department:1 truncate:1 combination:1 describes:1 remain:2 kakade:1 g3:3 happens:2 equation:12 remains:1 discus:1 describing:1 fail:1 know:1 studying:1 apply:1 observe:4 v2:43 generic:6 away:1 alternative:1 assumes:2 remaining:2 ensure:3 include:1 prof:1 classical:2 society:1 already:1 quantity:1 occurs:1 gradient:1 unable:1 thank:2 induction:2 assuming:1 touretsky:1 relationship:4 andw:1 design:1 unknown:1 perform:6 mate:1 acknowledge:1 descent:2 communication:1 precise:1 arbitrary:1 connection:1 california:2 nip:2 able:1 usually:2 dynamical:1 summarize:1 including:1 max:1 power:5 event:2 ia:1 treated:1 natural:1 egi:1 representing:1 
spellman:1 created:1 review:1 acknowledgement:1 generation:1 limitation:1 proven:1 versus:1 ingredient:2 h2:5 foundation:2 sufficient:4 verification:2 summary:1 allow:1 understand:1 explaining:1 wagner:32 calculated:1 rich:1 forward:1 made:1 adaptive:1 san:1 prokasy:1 ml:22 active:1 wescon:1 assumed:1 alternatively:1 iterative:1 learn:2 nature:1 ca:1 ignoring:1 vj:2 motivation:1 augmented:3 causality:1 third:3 croft:1 down:2 theorem:7 remained:1 physiological:1 genericity:7 simply:1 josh:1 trix:1 g2:1 corresponds:3 ma:2 replace:1 change:1 determined:1 except:1 total:1 experimental:2 meaningful:1 aaron:1 support:1 inability:1 violated:3 |
1,992 | 281 | An Efficient Implementation of the Back-propagation Algorithm
A n Efficient Implementation of
the Back-propagation Algorithm on
the Connection Machine CM-2
Xiru Zhang!
Michael Mckenna
Jill P. Mesirov
David L. Waltz
Thinking Machines Corporation
245 First Street, Cambridge, MA 02142-1214
ABSTRACT
In this paper, we present a novel implementation of the widely used
Back-propagation neural net learning algorithm on the Connection
Machine CM-2 - a general purpose, massively parallel computer
with a hypercube topology. This implementation runs at about 180
million interconnections per second (IPS) on a 64K processor CM2. The main interprocessor communication operation used is 2D
nearest neighbor communication. The techniques developed here
can be easily extended to implement other algorithms for layered
neural nets on the CM-2, or on other massively parallel computers
which have 2D or higher degree connections among their processors.
1
Introduction
High-speed simulation of large artificial neural nets has become an important tool
for solving real world problems and for studying the dynamic behavior of large
populations of interconnected processing elements [3, 2]. This work is intended to
provide such a simulation tool for a widely used neural net learning algorithm - the
Back-propagation (BP) algorithm.[7]
The hardware we have used is the Connection Machine? CM-2.2 On a 64K processor CM-2 our implementation runs at 40 million Weight Update Per Second
1 This author is also a graduate student at Computer Science Department, Brandeis University,
Waltham, MA 02254-9110.
2 Connection Machine is a registered trademark of Thinking Machines Corporation.
801
802
Zhang, Mckenna, Mesirov and Waltz
(WUPS)3 for training, or 180 million Interconnection Per Second (IPS) for forwardpass, where IPS is defined in the DARPA NEURAL NETWORK STUDY [2] as "the
number of multiply-and-add operations that can be performed in a second" [on a
Back-propagation network). We believe that the techniques developed here can be
easily extended to implement other algorithms for layered neural nets on the CM-2,
or other massively parallel machines which have 2D or higher degree connections
among their processors.
2
The Connection Machine
The Connection Machine CM-2 is a massively parallel computer with up to 65,536
processors. Each processor has a single-bit processing unit and 64K or 256K bits
of local RAM. The processors run in SIMD mode. They are connected in an ncube topology, which permits highly efficient n dimensional grid communications.
The system software also provides scan and spread operations - e.g., when n?m
processors are connected as an n x m 2D grid, the summation (product, max,
etc.) of a "parallel variable" value in all the processors on a row of the grid 4 takes
only O(logm) time. It is possible to turn off any subset of the processors so that
instructions will only be performed by those processors that are currently active.
On the CM-2, every 32 processors share a floating point processing unit; and a 32
bit number can be stored across 32 processors (Le., one bit per processor). These
32 processors can each access this 32-bit number as if it were stored in its own
memory. This is a way of sharing data among processors locally. The CM-2 uses
a conventional computer such as a SUN-4, VAX or Symbolics Lisp Machine as a
front-end machine. Parallel extensions to the familiar programming languages LISP,
C, and FORTRAN, via the front-end, allow the user to program the Connection
Machine and the front-end system.
3
The Back-propagation Algorithm
The Back-propagation [7] algorithm works on layered, feed-forward networks (BP
net for short in the following discussion), where the processing units are arranged in
layers - there are an input layer, an output layer, and one or more "hidden layers"
(layers between the input and output layers). A BP net computes its output in
the following fashion: first an input pattern is set as the output of the units at the
input layer; then one layer at a time, from the input to hidden to output layer,
the units compute their outputs by applying an activation function to the weighted
sum of their inputs (which are the outputs of the unit at the lower layer(s) that are
connected to them}. The weights come from the links between the units.
The Back-propagation algorithm "trains" a BP net by adjusting the link weights
of the net using a set of "training examples." Each training example consists of
3 This includes the time required to read in the input pattern, propagate activation forward
through the network, read in the ideal output pattern, propagate the error signal backward through
the network, compute the weight changes, and change the weights.
t
That is, to add together one value from each processor on a row of the grid and distribute
the sum into all the processors on the same row .
An Efficient Implementation or the Back-propagation Algorithm
Output Layer
Hidden Layer
Input Layer
o
? ? ?
J
? ? ?
m-1
Figure 1: A 3-layer, fully-connected Back-propagation network that has the same number (m) of nodes at each layer.
an input pattern and an ideal output pattern that the user wants the network to
produce for that input. The weights are adjusted based on the difference between
the ideal output and the actual output of the net. This can be seen as a gradient
descen t process in the weight space.
After the training is done, the BP net can be applied to inputs that are not in the
set of training examples. For a new input pattern IP, the network tends to produce
an output similar to the training example whose input is similar to IP. This can be
used for interpolation, approximation, or generalization from examples depending
on the goal of the user [4].
4
The Implementation
In this section, we explain our implementation by presenting a simple example a three-layer fully-connected BP network that has the same number of nodes at
each layer. It is straightforward to extend it to general cases. For a more detailed
discussion, see reference [8].
4.1
A Simple Case
Figure 1 shows a fully-connected 3-layer BP network with m nodes on each layer.
In the following discussion, we will use N i ,; to denote the jth node (from the left)
on layer i, i E {O, 1, 2}, j E {O, 1, ... , m - I}; ~,{ is the weight of the link from
node Nk,h to node Ni,j, and bi ,; is the error at node N i ,;.
First, assume we have exactly m processors. We store a "column" of the network
in each processor. That is, processor j contains nodes No,j, N1,j and N 2 ,j. It also
contains the weights of the links going into Nl,j and N2 ,; (i.e., W~"t and W{,t for
803
804
Zhang, Mckenna, Mesirov and Waltz
.........
Link
Weigh ts
W 2?k
'.1 '5
Link
Weigh ts
W ,?k
0.1 '5
#
~~ ?~
1:1
-
JIII..._
...... ......
....
098
{???- ???@~ -~...
={???- ??? -.G><E)
?~ ?A
......
?
,,,,~-
Output Nodes
......
......
:
?
/
Hidden Nodes
~-
'lr.t~#m{ ~
f-
I-
-
Input Nodes
......
Multiply-accum ulate-rotate
Figure 2: The layout of the example network.
k E {o, 1, ... , m - I}). See Figure 2. The Back-propagation algorithm consists
of three steps: (1) forward pass to compute the network output; (2) backward
propagation to compute the errors at each node; and (3) weight update to adjust
the weights based on the errors. These steps are implemented as follows:
4.1.1
Forward Pass: Output(Ni ?j
)
= F(2:;;';ol Wii~l.k ?Output(Ni _ 1 ?k ))
We implement forward pass as follows:
1. Set the input node values; there is one input node per processor.
2. In each processor, multiply the input node value by the link weight between the
input node and the hidden node that is in the same processor; then accumulate
the product in the hidden node.
3. Rotate the input node values - each processor sends its input node value to
its nearest left neighbor processor, the leftmost processor sends its value to
the rightmost processor; i.e., do a left-circular-shift.
4. Repeat the multiply-accumulate-rotate cycles in the above two steps (2-3) m
times; every hidden node N 1 .j will then contain 2:;;;01W~!k ?Output(NO.k)'
Now apply the activation function F to that sum. (See Figure 2.)
5. Repeat steps 2-4 for the output layer, using the hidden layer as the input.
An Efficient Implementation of the Back-propagation Algorithm
4.1.2
Backward Propagation
For the output layer, 62 ,k, the error at each node N 2 ,k, is computed by
62 ,k
= Output(N ,k) . (1 2
Output(N2 ,k)) . (Target(N 2 ,k) - Output(N2 ,k)),
where Target(N 2,k) is the ideal output for node N 2,k. This error can be computed
in place, i.e., no inter-processor communication is needed. For the hidden layer,
61,; = Output(N1 ,;) ? (1 - Output (N 1 ,; )) ? E:=-ol w;,t .62 ,k
To compute E:;OI w;,t . 62 ,k for the hidden nodes, we
perform a multiplyaccumulate-rotate operation similar to the forward pass, but from the top down.
Notice that the weights between a hidden node and the output nodes are in different processors. So, instead of rotating 62 ,k 's at the output layer, we rotate the partial
sum of products for the hidden nodes: at the beginning every hidden node N 1 ,j has
an accumulator A; with initial value 0 in processor j. We do a left-circular-shift
on the Aj's. When Aj moves to processor k, we set Aj ~ Aj + W 12,jk ? 62,k. After
=
m rotations, Aj will return to processor j
4.1.3
Weight Update:
~W~:{ =
T}.
and its value will be
E:=-OI W 12,jk ? 62 ,k.
6i ,j .Output(Nk,h)
~ W~:{ is the weight increment for W~:{, T} is the "learning rate" and 6i,i is the error
for node Ni,;, which is computed in the backward propagation step and is stored in
processor j. The weight update step is done as follows:
1. In each processor j, for the weights between the input layer and hidden layer,
1 .
1 .
compute weight update ~Wo,'~
T}. 61 ,j . Output(No,k),S and add ~Wo,'~ to
1 ,j
w.O,k
=
.6
,
2. Rotate the input node values as in step 3 of the forward pass.
3. Repeat the above two steps m times, until all the weights between the input
layer and the hidden layer are updated.
4. Do the above for weights between the hidden layer and the output layer also.
We can see that the basic operation is the same for all three steps of the Backpropagation algorithm, i.e., multiply-accumulate-rotate. On the CM-2, multiply,
add (for accumulate) and circular-shift (for rotate) take roughly the same amount
of time, independent of the size of the machine. So the CM-2 spends only about
1/3 of its total time doing communication in our implementation.
=
6 Initially k
j, but the input node values will be rotated around in later steps, so k '# j in
general.
6
is in the sa.m.e processor as ~ W~"t all the weights going into node 1 ,] are in processor
W;"t
-
N
W::t
j. Also we can accumulate ~ W~:t for several training patterns instead of updating
every
time. We can also keep the previous weight change and add a "momentum" term here. (Our
implementation actually does all these. They are omitted here to simplify the explanation of the
basic ideas.)
80S
806
Zhang, Mckenna, Mesirov and Waltz
4.2
Replication of Networks
Usually, there are more processors on the CM-2 than the width of a BP network.
Suppose the network width is m and there are n?m processors; then we make n copies
of the network on the CM-2, and do the forwa.rd pass and backward propagation
for different training patterns on each copy of the network. For the weight update
step, we can sum up the weight changes from different copies of the network (i.e.
from different training patterns), then update the weights in all the copies by this
sum. This is equivalent to updating the weights after n training patterns on a single
copy of the BP network.
On the CM-2, every 32 processors can share the same set of data (see section 2).
We make use of this feature and store the BP network weights across sets of 32
processors. Thus each processor only needs to allocate one bit for each weight.
Also, since the weight changes from different training patterns are additive, there
is no need to add them up in advance - each copy of the network can update (add
to) the weights separately, as long as no two or more copies of the network update
the same weight at the same time. (Our implementation guarantees that no such
weight update conflict can occur.) See Figure 3.
We call the 32 copies of the network that share the same set of weights a block.
When the number of copies n > 32, say n = 32 . q, then there will be q blocks
on the CM-2. We need to sum up the weight changes from different blocks before
updating the weights in each block. This summation takes a very small portion of
the total running time (much less than 1%). So the time increase can usually be
ignored when there is more than one block. 7 Thus, the implementation speeds up
essentially linearly as the number of processors increases.
5
An Example: Character Image Recovery
In this example, a character, such as A, is encoded as a 16 x 16 pixel array. A
3-layer fully-connected network with 256 input nodes, 128 hidden nodes and 256
output nodes is trained with 64 character pixel arrays, each of which is used both
as the input pattern and the ideal output pattern. After the training is done
(maximum_error < 0.15),8 some noisy character images are fed into the network.
The network is then used to remove the noise (to recover the images). We can also
use the network recursively - to feed the network output back as the input.
Figure 4a shows the ideal outputs (odd columns) and the actual outputs (even
columns) of the network after the training. Figure 4b shows corrupted character
image inputs (odd columns) and the recovered images (even columns). The corrupted inputs have 30% noise, i.e., 30% of the pixels take random values in each
image. We can see that most of the characters are recovered.
7The summation is done using the scan and spread operations (see section 2), so its time
increases only logarithmically in proportion to the number of blocks. Usually there are only a few
blocks, thus we could use the nearest neighbor communication here instead without much loss of
performance.
8 This training took about 400 cycles.
An Efficient Implementation of the Back-propagation Algorithm
Parallel weight-update
{\
8
,.
I:'I'J
0
???
-
0
~-
} Shared
weights
~
~ -Output Nodes
(;!IiI
} Shared
we igh ts
(.
loS
-??
0
0
0
-
,...t
lUI
,
Y.:II
--:-Input Nodes
~
'\.
-
v
m
-
'/
,
Network N
,,
\
Network 2
\
Network 1
\
Figure 3: Replication of a BP network and parallel update of network weights. In the
weigbt update step, the nodes in each copy of the BP network loop through the weights
going into them in the following fashion: in the first loop, Network 1 updates the first
weight, Network 2 updates the second weight ... Network N updates the Nth weight; in
general, in the Jth loop, Network I updates [M od(I + J, N)]th weight . In this way, it is
guaranteed that no two networks update the same weight at the same time. When the
total number of weights going into each node is greater than N, we repeat the above loop .
AAaaBBbbTTttUUuu
CGcoDDddVVvvXXXX
EEeeFFffYYyyZZzz
GG9gHHhh00112233
I I i l' KKkk44556677
LLII NNnh8899??
OOOOPRPP??$$AA&&
RRrrSSss**++==-"':'
(a)
(b)
Figure 4: (a) Ideal outputs (in odd columns) and the actual after-training outputs (in
even columns) of a network with 256 input nodes, 128 hidden nodes and 256 output nodes
trained with character images. (b) Noisy inputs (in odd columns) and the corresponding
outputs ("cleaned-up" images) produced by the network.
807
808
Zhang, Mckenna, Mesirov and Waltz
Computer
BP performance (IPS)
CM-2
Cray X-MP
WARP (10)
ANZA plus
TRW MK V (16)
Butterfly (64)
SAle SIGMA-l
TIOdyessy
Convex C-1
VAX 8600
SUN 3
Symbolics 3600
180 M
50 M
17 M (WUPS)
10 M
10 M
8M
5-8 M
5M
3.6 M
2M
250 K
35 K
Table 1: Comparison of BP implementations on different computers.
In this example, we used a 4K processor CM-2. The BP network had 256 x 128 +
128x 256 65,536 weights. We made 64 copies of the network on the CM-2, so there
were 2 blocks. One weight update cycle9 took 1.66 seconds. Thus the performance
is: (65,536 x 64) -;- 1.66 ::::.:: 2,526,689 weight update per second (WUPS). Within
the 1.66 seconds, the communication between the two blocks took 0.0023 seconds.
If we run a network of the same size on a 64K processor CM_2,10 there will be
32 blocks, and the inter-block communication will take 0.0023 x I~ogg 322 = 0.0115
second. 11 And the overall performance will be:
=
(16 x 65,536 x 64) -;- (1.66 + 0.0115)
= 40,148,888 WUPS
Forward-pass took 22% of the total time. Thus if we ran the forward pass alone,
the speed would be 40,148,888 -;- 0.22::::.:: 182,494,940 IPS.
6
Comparison With Other Implementations
This implementation of the Back-propagation algorithm on the CM-2 runs much
more efficiently than previous CM implementations (e.g., see [1], [6]). Table 1 lists
the speeds of Back-propagation on different machines (obtained from reference [2]
and [5]).
See footnote 3 for definition.
Assume we have enough training patterns to fill up the CM-2.
11 We use scan and spread operations here, so the time used increases logrithmatically.
9
10
An Efficient Implementation of the Back-propagation Algorithm
7
Summary
In this paper, we have shown an example of efficient implementation of neural net
algorithms on the Connection Machine CM-2. We used Back-propagation because it
is the most widely implemented, and many researchers have used it as a benchmark.
The techniques developed here can be easily adapted to implement other algorithms
on layered neural nets.
The main communication operation used in this work is the 2D grid nearest neighbor
communication. The facility for a group of processors on the CM-2 to share data is
important in reducing the amount of space required to store network weights and
the communication between different copies of the network. These points should be
kept in mind when one tries to use the techniques described here on other machines.
The main lesson we learned from this work is that to implement an algorithm
efficiently on a massively parallel machine often requires re-thinking of the algorithm
to explore the parallel nature of the algorithm, rather than just a straightforward
translation of serial implementations.
Acknowledgement
Many thanks to Alex Singer, who read several drafts of this paper and helped
improve it. Lennart J ohnsson helped us solve a critical problem. Discussions with
other members of the Mathematical and Computational Sciences Group at Thinking
Machines Corporation also helped in many ways.
References
[1] Louis G. Ceci, Patrick Lynn, and Phillip E. Gardner. Efficient Distribution of BackPropagation Models on Parallel Architectures. Tech. Report CU-CS-409-88, Dept. of
Computer Science, University of Colorado, September 1988.
[2] MIT Lincoln Laboratory. Darpa Neural Network Study. Final Report, July 1988.
[3] Special Issue on Artificial Neural Systems. IEEE Computer, March 1988.
[4] Tomaso Poggio and Federico Girosi. A Theory of Networks for Approximation and
Learning. A.I.Memo 1140, MIT AI Lab, July 1989.
[5] Dean A. Pomerleau, George L. Gusciora David S. Touretzky, and H. T. Kung. Neural
Network Simulation at Warp Speed: How We Got 17 Million Connections Per Second.
In IEEE Int. Conf. on Neural Network&, July 1988. San Diego, CA.
[6] Charles R. Rosenberg and Guy Blelloch. An Implementation of Network Learning on
the Connection Machine. In Proceeding& of the Tenth International Joint Conference
on Artificial Intelligence, Milan, Italy, 1987.
[7] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations
by error propagation. In Parallel Di&tributed Proceuing, chapter 8. MIT Press, 1986.
[8] Xiru Zhang, Michael Mckenna, Jill P. Mesirov, and David L. Waltz. An Efficient
Implementation of The Back-Propagation Algorithm On the Connection Machine CM2. Technical Report RL-89-1, Thinking Machines Corp., 245 First St. Cambridge, MA
02114, 1989.
809
| 281 |@word cu:1 proportion:1 cm2:2 instruction:1 simulation:3 propagate:2 recursively:1 initial:1 contains:2 rightmost:1 recovered:2 od:1 activation:3 additive:1 girosi:1 remove:1 update:20 alone:1 intelligence:1 beginning:1 short:1 lr:1 provides:1 draft:1 node:42 llii:1 zhang:6 mathematical:1 interprocessor:1 become:1 replication:2 consists:2 cray:1 inter:2 roughly:1 tomaso:1 behavior:1 ol:2 actual:3 cm:23 spends:1 developed:3 corporation:3 guarantee:1 every:5 exactly:1 sale:1 unit:7 louis:1 before:1 local:1 tends:1 proceuing:1 tributed:1 interpolation:1 plus:1 graduate:1 bi:1 accumulator:1 block:11 implement:5 backpropagation:2 got:1 layered:4 applying:1 conventional:1 equivalent:1 dean:1 straightforward:2 layout:1 williams:1 convex:1 recovery:1 array:2 fill:1 population:1 increment:1 updated:1 target:2 suppose:1 diego:1 user:3 colorado:1 programming:1 us:1 wups:4 element:1 logarithmically:1 rumelhart:1 jk:2 updating:3 connected:7 sun:2 cycle:2 ran:1 weigh:2 dynamic:1 trained:2 solving:1 easily:3 darpa:2 joint:1 chapter:1 train:1 artificial:3 whose:1 encoded:1 widely:3 solve:1 say:1 interconnection:2 federico:1 noisy:2 ip:7 butterfly:1 final:1 net:13 took:4 mesirov:6 interconnected:1 product:3 loop:4 jiii:1 lincoln:1 milan:1 los:1 produce:2 rotated:1 depending:1 nearest:4 odd:4 sa:1 implemented:2 c:1 come:1 waltham:1 generalization:1 blelloch:1 summation:3 adjusted:1 extension:1 around:1 omitted:1 purpose:1 currently:1 tool:2 weighted:1 mit:3 rather:1 rosenberg:1 tech:1 initially:1 hidden:18 going:4 pixel:3 overall:1 among:3 issue:1 special:1 simd:1 thinking:5 report:3 simplify:1 few:1 floating:1 familiar:1 intended:1 logm:1 n1:2 highly:1 circular:3 multiply:6 adjust:1 nl:1 waltz:6 partial:1 poggio:1 rotating:1 re:1 mk:1 column:8 subset:1 front:3 stored:3 corrupted:2 thanks:1 st:1 international:1 off:1 michael:2 together:1 guy:1 conf:1 return:1 distribute:1 student:1 includes:1 int:1 mp:1 performed:2 later:1 try:1 helped:3 lab:1 doing:1 portion:1 recover:1 parallel:12 oi:2 ni:4 who:1 efficiently:2 lesson:1 produced:1 researcher:1 processor:46 explain:1 footnote:1 descen:1 touretzky:1 sharing:1 definition:1 di:1 adjusting:1 actually:1 back:19 trw:1 feed:2 higher:2 arranged:1 done:4 just:1 until:1 lennart:1 propagation:23 mode:1 aj:5 believe:1 phillip:1 contain:1 facility:1 read:3 laboratory:1 width:2 leftmost:1 presenting:1 image:8 novel:1 charles:1 rotation:1 rl:1 million:4 extend:1 accumulate:5 gusciora:1 cambridge:2 ai:1 rd:1 grid:5 language:1 had:1 access:1 etc:1 add:7 patrick:1 own:1 italy:1 massively:5 store:3 corp:1 seen:1 greater:1 george:1 signal:1 ii:1 july:3 technical:1 long:1 serial:1 basic:2 essentially:1 want:1 separately:1 sends:2 member:1 lisp:2 call:1 ideal:7 iii:1 enough:1 architecture:1 topology:2 idea:1 shift:3 allocate:1 wo:2 ignored:1 detailed:1 amount:2 locally:1 hardware:1 notice:1 per:7 group:2 tenth:1 kept:1 backward:5 ram:1 sum:7 run:5 place:1 bit:6 layer:32 guaranteed:1 adapted:1 occur:1 alex:1 bp:15 software:1 speed:5 department:1 march:1 across:2 character:7 ceci:1 turn:1 singer:1 fortran:1 xiru:2 mind:1 fed:1 needed:1 end:3 studying:1 operation:8 wii:1 permit:1 apply:1 symbolics:2 top:1 running:1 hypercube:1 move:1 september:1 gradient:1 link:7 street:1 lynn:1 sigma:1 memo:1 implementation:23 pomerleau:1 perform:1 benchmark:1 t:3 extended:2 communication:11 hinton:1 david:3 required:2 cleaned:1 connection:13 conflict:1 registered:1 learned:1 usually:3 pattern:14 ncube:1 program:1 max:1 memory:1 explanation:1 critical:1 nth:1 jill:2 improve:1 vax:2 
gardner:1 acknowledgement:1 fully:4 loss:1 degree:2 share:4 translation:1 row:3 summary:1 repeat:4 copy:12 jth:2 allow:1 warp:2 neighbor:4 world:1 computes:1 author:1 forward:9 made:1 san:1 brandeis:1 keep:1 active:1 accum:1 table:2 nature:1 ca:1 main:3 spread:3 linearly:1 noise:2 n2:3 fashion:2 momentum:1 down:1 mckenna:6 trademark:1 list:1 nk:2 explore:1 aa:1 ma:3 goal:1 shared:2 change:6 lui:1 reducing:1 total:4 pas:8 internal:1 scan:3 rotate:8 kung:1 dept:1 |
1,993 | 2,810 | The Curse of Highly Variable Functions for
Local Kernel Machines
Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux
Dept. IRO, Universit?e de Montr?eal
P.O. Box 6128, Downtown Branch, Montreal, H3C 3J7, Qc, Canada
{bengioy,delallea,lerouxni}@iro.umontreal.ca
Abstract
We present a series of theoretical arguments supporting the claim that a
large class of modern learning algorithms that rely solely on the smoothness prior ? with similarity between examples expressed with a local
kernel ? are sensitive to the curse of dimensionality, or more precisely
to the variability of the target. Our discussion covers supervised, semisupervised and unsupervised learning algorithms. These algorithms are
found to be local in the sense that crucial properties of the learned function at x depend mostly on the neighbors of x in the training set. This
makes them sensitive to the curse of dimensionality, well studied for
classical non-parametric statistical learning. We show in the case of the
Gaussian kernel that when the function to be learned has many variations,
these algorithms require a number of training examples proportional to
the number of variations, which could be large even though there may exist short descriptions of the target function, i.e. their Kolmogorov complexity may be low. This suggests that there exist non-local learning
algorithms that at least have the potential to learn about such structured
but apparently complex functions (because locally they have many variations), while not using very specific prior domain knowledge.
1
Introduction
A very large fraction of the recent work in statistical machine learning has been focused
on non-parametric learning algorithms which rely solely, explicitly or implicitely, on the
smoothness prior, which says that we prefer as solution functions f such that when x ? y,
f (x) ? f (y). Additional prior knowledge is expressed by choosing the space of the
data and the particular notion of similarity between examples (typically expressed as a
kernel function). This class of learning algorithms therefore includes most of the kernel machine algorithms (Sch?olkopf, Burges and Smola, 1999), such as Support Vector
Machines (SVMs) (Boser, Guyon and Vapnik, 1992; Cortes and Vapnik, 1995) or Gaussian processes (Williams and Rasmussen, 1996), but also unsupervised learning algorithms
that attempt to capture the manifold structure of the data, such as Locally Linear Embedding (Roweis and Saul, 2000), Isomap (Tenenbaum, de Silva and Langford, 2000), kernel PCA (Sch?olkopf, Smola and M?uller, 1998), Laplacian Eigenmaps (Belkin and Niyogi,
2003), Manifold Charting (Brand, 2003), and spectral clustering algorithms (see (Weiss,
1999) for a review). More recently, there has also been much interest in non-parametric
semi-supervised learning algorithms, such as (Zhu, Ghahramani and Lafferty, 2003; Zhou
et al., 2004; Belkin, Matveeva and Niyogi, 2004; Delalleau, Bengio and Le Roux, 2005),
which also fall in this category, and share many ideas with manifold learning algorithms.
Since this is a very large class of algorithms and it is attracting so much attention, it is
worthwhile to investigate its limitations, and this is the main goal of this paper. Since
these methods share many characteristics with classical non-parametric statistical learning
algorithms (such as the k-nearest neighbors and the Parzen windows regression and density
estimation algorithms (Duda and Hart, 1973)), which have been shown to suffer from the
so-called curse of dimensionality, it is logical to investigate the following question: to what
extent do these modern kernel methods suffer from a similar problem?
In this paper, we focus on algorithms in which the learned function is expressed in terms
of a linear combination of kernel functions applied on the training examples:
n
X
f (x) = b +
?i KD (x, xi )
(1)
i=1
where optionally a bias term b is added, D = {z1 , . . . , zn } are training examples (zi = xi
for unsupervised learning, zi = (xi , yi ) for supervised learning, and yi can take a special
?missing? value for semi-supervised learning). The ?i ?s are scalars chosen by the learning
algorithm using D, and KD (?, ?) is the kernel function, a symmetric function (sometimes
expected to be positive definite), which may be chosen by taking into account all the x i ?s.
A typical kernel function is the Gaussian kernel,
2
1
K? (u, v) = e? ?2 ||u?v|| ,
(2)
with the width ? controlling how local the kernel is. See (Bengio et al., 2004) to see that
LLE, Isomap, Laplacian eigenmaps and other spectral manifold learning algorithms such
as spectral clustering can be generalized to be written as in eq. 1 for a test point x.
One obtains consistency of classical non-parametric estimators by appropriately varying
the hyper-parameter that controls the locality of the estimator as n increases. Basically, the
kernel should be allowed to become more and more local, so that statistical bias goes to
zero, but the ?effective number of examples? involved in the estimator at x (equal to k for
the k-nearest neighbor estimator) should increase as n increases, so that statistical variance
is also driven to 0. For a wide class of kernel regression estimators, the unconditional
variance and squared bias can be shown to be written as follows (H?ardle et al., 2004):
C1
+ C2 ? 4 ,
n? d
with C1 and C2 not depending on n nor on the dimension d. Hence an optimal bandwidth is
?1
chosen proportional to n 4+d , and the resulting generalization error (not counting the noise)
converges in n?4/(4+d) , which becomes very slow for large d. Consider for example the
increase in number of examples required to get the same level of error, in 1 dimension
versus d dimensions. If n1 is the number of examples required to get a level of error e,
(4+d)/5
to get the same level of error in d dimensions requires on the order of n1
examples,
i.e. the required number of examples is exponential in d. For the k-nearest neighbor
classifier, a similar result is obtained (Snapp and Venkatesh, 1998):
expected error =
expected error = E? +
?
X
cj n?j/d
j=2
where E? is the asymptotic error, d is the dimension and n the number of examples.
Note however that, if the data distribution is concentrated on a lower dimensional manifold,
it is the manifold dimension that matters. Indeed, for data on a smooth lower-dimensional
manifold, the only dimension that say a k-nearest neighbor classifier sees is the dimension
of the manifold, since it only uses the Euclidean distances between the near neighbors, and
if they lie on such a manifold then the local Euclidean distances approach the local geodesic
distances on the manifold (Tenenbaum, de Silva and Langford, 2000).
2
Minimum Number of Bases Required
In this section we present results showing the number of required bases (hence of training
examples) of a kernel machine with Gaussian kernel may grow linearly with the ?variations? of the target function that must be captured in order to achieve a given error level.
2.1
Result for Supervised Learning
The following theorem informs us about the number of sign changes that a Gaussian kernel
machine can achieve, when it has k bases (i.e. k support vectors, or at least k training
examples).
Theorem 2.1 (Theorem 2 of (Schmitt, 2002)). Let f : ℝ → ℝ be a function computed by a Gaussian
kernel machine (eq. 1) with k bases (non-zero α_i's). Then f has at most 2k zeros.
We would like to say something about kernel machines in ℝ^d, and we can do this simply by
considering a straight line in ℝ^d and the number of sign changes that the solution function
f can achieve along that line.
Corollary 2.2. Suppose that the learning problem is such that, in order to achieve a given
error level for samples from a distribution P with a Gaussian kernel machine (eq. 1),
f must change sign at least 2k times along some straight line (i.e., in the case of a classifier,
the decision surface must be crossed at least 2k times by that straight line). Then the kernel
machine must have at least k bases (non-zero α_i's).
Proof. Let the straight line be parameterized by x(t) = u + tw, with t ∈ ℝ and ‖w‖ = 1
without loss of generality. Define g : ℝ → ℝ by g(t) = f(u + tw).
If f is a Gaussian kernel classifier with k' bases, then g can be written

g(t) = b + ∑_{i=1}^{k'} α_i exp(−(t − t_i)²/(2σ²))

where u + t_i w is the projection of x_i on the line D_{u,w} = {u + tw, t ∈ ℝ}, and α_i ≠ 0.
The number of bases of g is k'' ≤ k', as there may exist x_i ≠ x_j such that t_i = t_j. Since
g must change sign at least 2k times, thanks to theorem 2.1, we can conclude that g has at
least k bases, i.e. k ≤ k'' ≤ k'.
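Theorem 2.1 is easy to probe numerically; the sketch below (ours, with illustrative parameters) draws a random one-dimensional Gaussian kernel machine with k bases and counts its sign changes on a fine grid, which should never exceed 2k:

```python
import numpy as np

rng = np.random.default_rng(0)
k, sigma = 5, 0.3
centers = rng.uniform(-3, 3, size=k)
alphas = rng.normal(size=k)
b = rng.normal()

t = np.linspace(-5, 5, 200_001)
g = b + sum(a * np.exp(-(t - c) ** 2 / (2 * sigma ** 2))
            for a, c in zip(alphas, centers))

sign_changes = int(np.sum(np.sign(g[:-1]) != np.sign(g[1:])))
print(sign_changes, "sign changes; bound 2k =", 2 * k)
```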
The above theorem tells us that if we are trying to represent a function that locally varies a
lot (in the sense that its sign along a straight line changes many times), then we need many
training examples to do so with a Gaussian kernel machine. Note that it says nothing about
the dimensionality of the space, but we might expect to have to learn functions that vary
more when the data is high-dimensional. The next theorem confirms this suspicion in the
special case of the d-bits parity function:
parity : (b_1, ..., b_d) ∈ {0,1}^d ↦ 1 if ∑_{i=1}^{d} b_i is even, −1 otherwise.
We will show that learning this apparently simple function with Gaussians centered on
points in {0,1}^d is difficult, in the sense that it requires a number of Gaussians exponential
in d (for a fixed Gaussian width). Note that our corollary 2.2 does not apply to the d-bits
parity function, so it represents another type of local variation (not along a line). However,
we are also able to prove a strong result about that case. We will use the following notations:
X_d = {0,1}^d = {x_1, x_2, ..., x_{2^d}}
H_d^0 = {(b_1, ..., b_d) ∈ X_d | b_d = 0}   (3)
H_d^1 = {(b_1, ..., b_d) ∈ X_d | b_d = 1}   (4)
We say that a decision function f : ℝ^d → ℝ solves the parity problem if sign(f(x_i)) =
parity(x_i) for all i in {1, ..., 2^d}.
Lemma 2.3. Let f(x) = ∑_{i=1}^{2^d} α_i K_σ(x_i, x) be a linear combination of Gaussians
with same width σ centered on points x_i ∈ X_d. If f solves the parity problem, then
α_i parity(x_i) > 0 for all i.
Proof. We prove this lemma by induction on d. If d = 1 there are only 2 points. Obviously
one Gaussian is not enough to classify correctly x_1 and x_2, so both α_1 and α_2 are non-zero,
and α_1 α_2 < 0 (otherwise f is of constant sign). Without loss of generality, assume
parity(x_1) = 1 and parity(x_2) = −1. Then f(x_1) > 0 > f(x_2), which implies
α_1(1 − K_σ(x_1, x_2)) > α_2(1 − K_σ(x_1, x_2)) and α_1 > α_2 since K_σ(x_1, x_2) < 1. Thus α_1 > 0
and α_2 < 0, i.e. α_i parity(x_i) > 0 for i ∈ {1, 2}.
Suppose now lemma 2.3 is true for d = d_0 − 1, and consider the case d = d_0. We denote
by x_i^0 the points in H_d^0 and by α_i^0 their coefficient in the expansion of f (see eq. 3 for the
definition of H_d^0). For x_i^0 ∈ H_d^0, we denote by x_i^1 ∈ H_d^1 its projection on H_d^1 (obtained by
setting its last bit to 1), whose coefficient in f is α_i^1. For any x ∈ H_d^0 and x_j^1 ∈ H_d^1 we
have:

K_σ(x_j^1, x) = exp(−‖x_j^1 − x‖²/(2σ²)) = exp(−1/(2σ²)) exp(−‖x_j^0 − x‖²/(2σ²)) = γ K_σ(x_j^0, x)

where γ = exp(−1/(2σ²)) ∈ (0, 1). Thus f(x) for x ∈ H_d^0 can be written

f(x) = ∑_{x_i^0∈H_d^0} α_i^0 K_σ(x_i^0, x) + ∑_{x_j^1∈H_d^1} α_j^1 γ K_σ(x_j^0, x)
     = ∑_{x_i^0∈H_d^0} (α_i^0 + γ α_i^1) K_σ(x_i^0, x).
Since H_d^0 is isomorphic to X_{d−1}, the restriction of f to H_d^0 implicitly defines a function
over X_{d−1} that solves the parity problem (because the last bit in H_d^0 is 0, the parity is not
modified). Using our induction hypothesis, we have that for all x_i^0 ∈ H_d^0:

(α_i^0 + γ α_i^1) parity(x_i^0) > 0.   (5)
A similar reasoning can be made if we switch the roles of H_d^0 and H_d^1. One has to be careful
that the parity is modified between H_d^1 and its mapping to X_{d−1} (because the last bit in H_d^1
is 1). Thus we obtain that the restriction of (−f) to H_d^1 defines a function over X_{d−1} that
solves the parity problem, and the induction hypothesis tells us that for all x_j^1 ∈ H_d^1:

(−(α_j^1 + γ α_j^0)) (−parity(x_j^1)) > 0,   (6)

and the two negative signs cancel out, leaving (α_j^1 + γ α_j^0) parity(x_j^1) > 0. Now consider
any x_i^0 ∈ H_d^0 and its projection x_i^1 ∈ H_d^1. Without loss of generality, assume parity(x_i^0) = 1
(and thus parity(x_i^1) = −1). Using eq. 5 and 6 we obtain:

α_i^0 + γ α_i^1 > 0
α_i^1 + γ α_i^0 < 0

It is obvious that for these two equations to be simultaneously verified, we need α_i^0 and
α_i^1 to be non-zero and of opposite sign. Moreover, because γ ∈ (0, 1), α_i^0 + γ α_i^1 > 0 >
α_i^1 + γ α_i^0 implies α_i^0 > α_i^1, which implies α_i^0 > 0 and α_i^1 < 0, i.e. α_i^0 parity(x_i^0) > 0 and
α_i^1 parity(x_i^1) > 0. Since this is true for all x_i^0 in H_d^0, we have proved lemma 2.3.
Theorem 2.4. Let f(x) = b + ∑_{i=1}^{2^d} α_i K_σ(x_i, x) be an affine combination of Gaussians
with same width σ centered on points x_i ∈ X_d. If f solves the parity problem, then there
are at least 2^{d−1} non-zero coefficients α_i.
Proof. We begin with two preliminary results. First, given any x_i ∈ X_d, the number of
points in X_d that differ from x_i by exactly k bits is (d choose k). Thus,

∑_{x_j∈X_d} K_σ(x_i, x_j) = ∑_{k=0}^{d} (d choose k) exp(−k/(2σ²)) = c_σ.   (7)
Second, it is possible to find a linear combination (i.e. without bias) of Gaussians g such
that g(x_i) = f(x_i) for all x_i ∈ X_d. Indeed, let

g(x) = f(x) − b + ∑_{x_j∈X_d} β_j K_σ(x_j, x).   (8)

g verifies g(x_i) = f(x_i) iff ∑_{x_j∈X_d} β_j K_σ(x_j, x_i) = b, i.e. the vector β satisfies the linear
system M_σ β = b1, where M_σ is the kernel matrix whose element (i, j) is K_σ(x_i, x_j)
and 1 is a vector of ones. It is well known that M_σ is invertible as long as the x_i are all
different, which is the case here (Micchelli, 1986). Thus β = b M_σ^{−1} 1 is the only solution
to the system.
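Both preliminaries are easy to check numerically; a small sketch (ours, with illustrative d and σ, using the exp(−‖·‖²/2σ²) convention of the proofs) verifies that every row of M_σ sums to c_σ, i.e. M_σ 1 = c_σ 1:

```python
import itertools
from math import comb
import numpy as np

d, sigma = 3, 0.7
X = np.array(list(itertools.product([0, 1], repeat=d)), dtype=float)

# Kernel matrix over the hypercube; squared distances equal Hamming distances.
M = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (2 * sigma ** 2))
c_sigma = sum(comb(d, k) * np.exp(-k / (2 * sigma ** 2)) for k in range(d + 1))
print(np.allclose(M @ np.ones(2 ** d), c_sigma))  # True: M 1 = c_sigma 1
```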
We now proceed to the proof of the theorem. By contradiction, suppose f solves the parity
problem with less than 2^{d−1} non-zero coefficients α_i. Then there exist two points x_s and x_t
in X_d such that α_s = α_t = 0 and parity(x_s) = 1 = −parity(x_t). Consider the function
g defined as in eq. 8 with β = b M_σ^{−1} 1. Since g(x_i) = f(x_i) for all x_i ∈ X_d, g solves
the parity problem with a linear combination of Gaussians centered on points in X_d. Thus,
applying lemma 2.3, we have in particular that β_s parity(x_s) > 0 and β_t parity(x_t) > 0
(because α_s = α_t = 0), so that β_s β_t < 0. But, because of eq. 7, M_σ 1 = c_σ 1, which means
1 is an eigenvector of M_σ with eigenvalue c_σ > 0. Consequently, 1 is also an eigenvector
of M_σ^{−1} with eigenvalue c_σ^{−1} > 0, and β = b M_σ^{−1} 1 = b c_σ^{−1} 1, which is in contradiction
with β_s β_t < 0: f must therefore have at least 2^{d−1} non-zero coefficients.
The bound in theorem 2.4 is tight, since it is possible to solve the parity problem with
exactly 2^{d−1} Gaussians and a bias, for instance by using a negative bias and putting a
positive weight on each example satisfying parity(x_i) = 1. When trained to learn the
parity function, an SVM may learn a function that looks like the opposite of the parity on
test points (while still performing optimally on training points), but this is an artefact of the
specific geometry of the problem, and only occurs when the training set size is appropriate
compared to |X_d| = 2^d (see (Bengio, Delalleau and Le Roux, 2005) for details). Note that
if the centers of the Gaussians are not restricted anymore to be points in X_d, it is possible
to solve the parity problem with only d + 1 Gaussians and no bias (Bengio, Delalleau and
Le Roux, 2005).
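The 2^{d−1}-Gaussian construction is straightforward to verify; a minimal sketch (ours, with illustrative d, σ and bias, again using the exp(−‖·‖²/2σ²) convention of the proofs):

```python
import itertools
import numpy as np

d, sigma = 4, 0.3
X = np.array(list(itertools.product([0, 1], repeat=d)), dtype=float)
parity = np.where(X.sum(axis=1) % 2 == 0, 1, -1)

centers = X[parity == 1]                 # the 2^(d-1) even-parity points
K = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
           / (2 * sigma ** 2))
f = K.sum(axis=1) - 0.5                  # unit positive weights, bias b = -0.5
print(np.all(np.sign(f) == parity))      # True for sigma small enough
```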
One may argue that parity is a simple discrete toy problem of little interest. But even if
we have to restrict the analysis to discrete samples in {0,1}^d for mathematical reasons, the
parity function can be extended to a smooth function on the [0,1]^d hypercube depending
only on the continuous sum b_1 + ... + b_d. Theorem 2.4 is thus a basis to argue that the
number of Gaussians needed to learn a function with many variations in a continuous space
may scale linearly with these variations, and thus possibly exponentially in the dimension.
2.2
Results for Semi-Supervised Learning
In this section we focus on algorithms of the type described in recent papers (Zhu, Ghahramani and Lafferty, 2003; Zhou et al., 2004; Belkin, Matveeva and Niyogi, 2004; Delalleau,
Bengio and Le Roux, 2005), which are graph-based non-parametric semi-supervised learning algorithms. Note that transductive SVMs, which are another class of semi-supervised
algorithms, are already subject to the limitations of corollary 2.2. The graph-based algorithms we consider here can be seen as minimizing the following cost function, as shown
in (Delalleau, Bengio and Le Roux, 2005):
C(Ŷ) = ‖Ŷ_l − Y_l‖² + μ Ŷᵀ L Ŷ + ε ‖Ŷ‖²   (9)

with Ŷ = (ŷ_1, ..., ŷ_n) the estimated labels on both labeled and unlabeled data, and L the
(un-normalized) graph Laplacian derived from a similarity function W between points such
that W_ij = W(x_i, x_j) corresponds to the weights of the edges in the graph. Here, Ŷ_l =
(ŷ_1, ..., ŷ_l) is the vector of estimated labels on the l labeled examples, whose known labels
are given by Y_l = (y_1, ..., y_l), and one may constrain Ŷ_l = Y_l as in (Zhu, Ghahramani and
Lafferty, 2003) by letting μ → 0. We define a region with constant label as a connected
subset of the graph where all nodes x_i have the same estimated label (sign of ŷ_i), and such
that no other node can be added while keeping these properties.
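Setting the gradient of eq. 9 to zero yields a linear system; the sketch below (ours, assuming the first l nodes are the labeled ones and using illustrative μ and ε) solves it directly:

```python
import numpy as np

def minimize_eq9(W, y_l, mu=1.0, eps=1e-3):
    """Minimize ||Yhat_l - Y_l||^2 + mu Yhat' L Yhat + eps ||Yhat||^2."""
    n, l = W.shape[0], len(y_l)
    L = np.diag(W.sum(axis=1)) - W          # un-normalized graph Laplacian
    S = np.zeros((n, n)); S[:l, :l] = np.eye(l)
    y = np.zeros(n); y[:l] = y_l
    # Stationarity: (S + mu*L + eps*I) Yhat = S Y
    return np.linalg.solve(S + mu * L + eps * np.eye(n), S @ y)

# Toy 5-node chain graph with the first two nodes labeled +1 and -1.
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(minimize_eq9(W, y_l=np.array([1.0, -1.0])))
```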
Proposition 2.5. After running a label propagation algorithm minimizing the cost of eq. 9,
the number of regions with constant estimated label is less than (or equal to) the number
of labeled examples.
Proof. By contradiction, if this proposition is false, then there exists a region with constant
estimated label that does not contain any labeled example. Without loss of generality,
consider the case of a positive constant label, with x_{l+1}, ..., x_{l+q} the q samples in this
region. The part of the cost of eq. 9 depending on their labels is

C(ŷ_{l+1}, ..., ŷ_{l+q}) = (μ/2) ∑_{i,j=l+1}^{l+q} W_ij (ŷ_i − ŷ_j)²
  + μ [ ∑_{i=l+1}^{l+q} ∑_{j∉{l+1,...,l+q}} W_ij (ŷ_i − ŷ_j)² ] + ε ∑_{i=l+1}^{l+q} ŷ_i².
The second term is strictly positive, and because the region we consider is maximal (by
definition) all samples x_j outside of the region such that W_ij > 0 verify ŷ_j < 0 (for x_i a
sample in the region). Since all ŷ_i are strictly positive for i ∈ {l+1, ..., l+q}, this means
this second term can be strictly decreased by setting all ŷ_i to 0 for i ∈ {l+1, ..., l+q}.
This also sets the first and third terms to zero (i.e. their minimum), showing that the set of
labels ŷ_i are not optimal, which conflicts with their definition as labels minimizing C.
This means that if the class distributions are such that there are many distinct regions with
constant labels (either separated by low-density regions or regions with samples from the
other class), we will need at least the same number of labeled samples as there are such
regions (assuming we are using a sparse local kernel such as the k-nearest neighbor kernel,
or a thresholded Gaussian kernel). But this number could grow exponentially with the
dimension of the manifold(s) on which the data lie, for instance in the case of a labeling
function varying highly along each dimension, even if the label variations are "simple" in
a non-local sense, e.g. if they alternate in a regular fashion. When the kernel is not sparse
(e.g. Gaussian kernel), obtaining such a result is less obvious. However, there often exists
a sparse approximation of the kernel. Thus we conjecture the same kind of result holds for
dense weight matrices, if the weighting function is local in the sense that it is close to zero
when applied to a pair of examples far from each other.
3
Extensions and Conclusions
In (Bengio, Delalleau and Le Roux, 2005) we present additional results that apply to unsupervised learning algorithms such as non-parametric manifold learning algorithms (Roweis
and Saul, 2000; Tenenbaum, de Silva and Langford, 2000; Schölkopf, Smola and Müller,
1998; Belkin and Niyogi, 2003). We find that when the underlying manifold varies a lot
in the sense of having high curvature in many places, then a large number of examples is
required. Note that the tangent plane is defined by the derivatives of the kernel machine
function f , for such algorithms. The core result is that the manifold tangent plane at x
is mostly defined by the near neighbors of x in the training set (more precisely it is constrained to be in the span of the vectors x − x_i, with x_i a neighbor of x). Hence one needs
to cover the manifold with small enough linear patches with at least d + 1 examples per
patch (where d is the dimension of the manifold).
In the same paper, we present a conjecture that generalizes the results presented here for
Gaussian kernel classifiers to a larger class of local kernels, using the same notion of locality of the derivative summarized above for manifold learning algorithms. In that case the
derivative of f represents the normal of the decision surface, and we find that at x it mostly
depends on the neighbors of x in the training set.
It could be argued that if a function has many local variations (hence is not very smooth),
then it is not learnable unless having strong prior knowledge at hand. However, this is not
true. For example consider functions that have low Kolmogorov complexity, i.e. can be
described by a short string in some language. The only prior we need in order to quickly
learn such functions (in terms of number of examples needed) is that functions that are
simple to express in that language (e.g. a programming language) are preferred. For example, the functions g(x) = sin(x) or g(x) = parity(x) would be easy to learn using
the C programming language to define the prior, even though the number of variations of
g(x) can be chosen to be arbitrarily large (hence also the number of required training examples when using only the smoothness prior), while keeping the Kolmogorov complexity
constant. We do not propose to necessarily focus on the Kolmogorov complexity to design
new learning algorithms, but we use this example to illustrate that it is possible to learn
apparently complex functions (because they vary a lot), as long as one uses a ?non-local?
learning algorithm, corresponding to a broad prior, not solely relying on the smoothness
prior. Of course, if additional domain knowledge about the task is available, it should be
used, but without abandoning research on learning algorithms that can address a wider
scope of problems. We hope that this paper will stimulate more research into such learning
algorithms, since we expect local learning algorithms (that only rely on the smoothness
prior) will be insufficient to make significant progress on complex problems such as those
raised by research on Artificial Intelligence.
Acknowledgments
The authors would like to thank the following funding organizations for support: NSERC,
MITACS, and the Canada Research Chairs. The authors are also grateful for the feedback
and stimulating exchanges that helped shape this paper, with Yann Le Cun and Léon Bottou,
as well as for the anonymous reviewers? helpful comments.
References
Belkin, M., Matveeva, I., and Niyogi, P. (2004). Regularization and semi-supervised learning on large graphs. In Shawe-Taylor, J. and Singer, Y., editors, COLT'2004. Springer.
Belkin, M. and Niyogi, P. (2003). Using manifold structure for partially labeled classification. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15, Cambridge, MA. MIT Press.
Bengio, Y., Delalleau, O., and Le Roux, N. (2005). The curse of dimensionality for local kernel machines. Technical Report 1258, Département d'informatique et recherche opérationnelle, Université de Montréal.
Bengio, Y., Delalleau, O., Le Roux, N., Paiement, J.-F., Vincent, P., and Ouimet, M. (2004). Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197–2219.
Boser, B., Guyon, I., and Vapnik, V. (1992). A training algorithm for optimal margin classifiers. In Fifth Annual Workshop on Computational Learning Theory, pages 144–152, Pittsburgh.
Brand, M. (2003). Charting a manifold. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15. MIT Press.
Cortes, C. and Vapnik, V. (1995). Support vector networks. Machine Learning, 20:273–297.
Delalleau, O., Bengio, Y., and Le Roux, N. (2005). Efficient non-parametric function induction in semi-supervised learning. In Cowell, R. and Ghahramani, Z., editors, Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, Jan 6-8, 2005, Savannah Hotel, Barbados, pages 96–103. Society for Artificial Intelligence and Statistics.
Duda, R. and Hart, P. (1973). Pattern Classification and Scene Analysis. Wiley, New York.
Härdle, W., Müller, M., Sperlich, S., and Werwatz, A. (2004). Nonparametric and Semiparametric Models. Springer, http://www.xplore-stat.de/ebooks/ebooks.html.
Micchelli, C. A. (1986). Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11–22.
Roweis, S. and Saul, L. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326.
Schmitt, M. (2002). Descartes' rule of signs for radial basis function neural networks. Neural Computation, 14(12):2997–3011.
Schölkopf, B., Burges, C. J. C., and Smola, A. J. (1999). Advances in Kernel Methods – Support Vector Learning. MIT Press, Cambridge, MA.
Schölkopf, B., Smola, A., and Müller, K.-R. (1998). Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319.
Snapp, R. R. and Venkatesh, S. S. (1998). Asymptotic derivation of the finite-sample risk of the k nearest neighbor classifier. Technical Report UVM-CS-1998-0101, Department of Computer Science, University of Vermont.
Tenenbaum, J., de Silva, V., and Langford, J. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323.
Weiss, Y. (1999). Segmentation using eigenvectors: a unifying view. In Proceedings IEEE International Conference on Computer Vision, pages 975–982.
Williams, C. and Rasmussen, C. (1996). Gaussian processes for regression. In Touretzky, D., Mozer, M., and Hasselmo, M., editors, Advances in Neural Information Processing Systems 8, pages 514–520. MIT Press, Cambridge, MA.
Zhou, D., Bousquet, O., Navin Lal, T., Weston, J., and Schölkopf, B. (2004). Learning with local and global consistency. In Thrun, S., Saul, L., and Schölkopf, B., editors, Advances in Neural Information Processing Systems 16, Cambridge, MA. MIT Press.
Zhu, X., Ghahramani, Z., and Lafferty, J. (2003). Semi-supervised learning using Gaussian fields and harmonic functions. In ICML'2003.
1,994 | 2,811 | Inference with Minimal Communication:
a Decision-Theoretic Variational Approach
O. Patrick Kreidl and Alan S. Willsky
Department of Electrical Engineering and Computer Science
MIT Laboratory for Information and Decision Systems
Cambridge, MA 02139
{opk,willsky}@mit.edu
Abstract
Given a directed graphical model with binary-valued hidden nodes and
real-valued noisy observations, consider deciding upon the maximum
a-posteriori (MAP) or the maximum posterior-marginal (MPM) assignment under the restriction that each node broadcasts only to its children
exactly one single-bit message. We present a variational formulation,
viewing the processing rules local to all nodes as degrees-of-freedom,
that minimizes the loss in expected (MAP or MPM) performance subject
to such online communication constraints. The approach leads to a novel
message-passing algorithm to be executed offline, or before observations
are realized, which mitigates the performance loss by iteratively coupling all rules in a manner implicitly driven by global statistics. We also
provide (i) illustrative examples, (ii) assumptions that guarantee convergence and efficiency and (iii) connections to active research areas.
1
Introduction
Given a probabilistic model with discrete-valued hidden variables, Belief Propagation (BP)
and related graph-based algorithms are commonly employed to solve for the Maximum A-Posteriori (MAP) assignment (i.e., the mode of the joint distribution of all hidden variables)
and Maximum-Posterior-Marginal (MPM) assignment (i.e., the modes of the marginal distributions of every hidden variable) [1]. The established "message-passing" interpretation
of BP extends naturally to a distributed network setting: associating to each node and
edge in the graph a distinct processor and communication link, respectively, the algorithm
is equivalent to a sequence of purely-local computations interleaved with only nearestneighbor communications. Specifically, each computation event corresponds to a node
evaluating its local processing rule, or a function by which all messages received in the
preceding communication event map to messages sent in the next communication event.
Practically, the viability of BP appears to rest upon an implicit assumption that network
communication resources are abundant. In a general network, because termination of the algorithm is in question, the required communication resources are a-priori unbounded. Even
when termination can be guaranteed, transmission of exact messages presumes communication channels with infinite capacity (in bits per observation), or at least of sufficiently
high bandwidth such that the resulting finite message precision is essentially error-free. In
some distributed settings (e.g., energy-limited wireless sensor networks), it may be prohibitively costly to justify such idealized online communications. While recent evidence
suggests substantial but ?small-enough? message errors will not alter the behavior of BP
[2], [3], it also suggests BP may perform poorly when communication is very constrained.
Assuming communication constraints are severe, we examine the extent to which alternative processing rules can avoid a loss in (MAP or MPM) performance. Specifically, given
a directed graphical model with binary-valued hidden variables and real-valued noisy observations, we assume each node may broadcast only to its children a single binary-valued
message. We cast the problem within a variational formulation [4], seeking to minimize a
decision-theoretic penalty function subject to such online communication constraints. The
formulation turns out to be an extension of the optimization problem underlying the decentralized detection paradigm [5], [6], which advocates a team-theoretic [7] relaxation of the
original problem to both justify a particular finite parameterization for all local processing
rules and obtain an iterative algorithm to be executed offline (i.e., before observations are
realized). To our knowledge, that this relaxation permits analytical progress given any directed acyclic network is new. Moreover, for MPM assignment in a tree-structured network,
we discover an added convenience with respect to the envisioned distributed processor setting: the offline computation itself admits an efficient message-passing interpretation.
This paper is organized as follows. Section 2 details the decision-theoretic variational formulation for discrete-variable assignment. Section 3 summarizes the main results derived
from its connection to decentralized detection, culminating in the offline message-passing
algorithm and the assumptions that guarantee convergence and maximal efficiency. We
omit the mathematical proofs [8] here, focusing instead on intuition and illustrative examples. Closing remarks and relations to other active research areas appear in Section 4.
2
Variational Formulation
In abstraction, the basic ingredients are (i) a joint distribution p(x, y) for two length-N
random vectors X and Y, taking hidden and observable values in the sets {0,1}^N and
ℝ^N, respectively; (ii) a decision-theoretic penalty function J : Γ → ℝ, where Γ denotes
the set of all candidate strategies γ : ℝ^N → {0,1}^N for posterior assignment; and (iii) the
set Γ^G ⊂ Γ of strategies that also respect stipulated communication constraints in a given
N-node directed acyclic network G. The ensuing optimization problem is expressed by

J(γ*) = min_{γ∈Γ} J(γ) subject to γ ∈ Γ^G,   (1)

where γ* then represents an optimal network-constrained strategy for discrete-variable assignment. The following subsections provide details unseen at this level of abstraction.
2.1
Decision-Theoretic Penalty Function
Let U = γ(Y) denote the decision process induced from the observation process Y by
any candidate assignment strategy γ ∈ Γ. If we associate a numeric "cost" c(u, x) to every
possible joint realization of (U, X), then the expected cost is a well-posed penalty function:

J(γ) = E[c(γ(Y), X)] = E[E[c(γ(Y), X) | Y]].   (2)

Expanding the inner expectation and recognizing p(x|y) to be proportional to p(x)p(y|x)
for every y such that p(y) > 0, it follows that γ̄* minimizes (2) over Γ if and only if

γ̄*(Y) = arg min_{u∈{0,1}^N} ∑_{x∈{0,1}^N} p(x) c(u, x) p(Y|x) with probability one.   (3)

Of note are (i) the likelihood function p(Y|x) is a finite-dimensional sufficient statistic
of Y, (ii) real-valued coefficients b̄(u, x) provide a finite parameterization of the function
space Γ and (iii) optimal coefficient values b̄*(u, x) = p(x)c(u, x) are computable offline.
Before introducing communication constraints, we illustrate by examples how the decision-theoretic penalty function relates to familiar discrete-variable assignment problems.
Example 1: Let c(u, x) indicate whether u ≠ x. Then (2) and (3) specialize to, respectively,
the word error rate (viewing each x as an N-bit word) and the MAP strategy:

γ̄*(Y) = arg max_{x∈{0,1}^N} p(x|Y) with probability one.
Example 2: Let c(u, x) = ∑_{n=1}^{N} c_n(u_n, x_n), where each c_n indicates whether u_n ≠ x_n.
Then (2) and (3) specialize to, respectively, the bit error rate and the MPM strategy:

γ̄*(Y) = ( arg max_{x_1∈{0,1}} p(x_1|Y), ..., arg max_{x_N∈{0,1}} p(x_N|Y) ) with probability one.
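To make the two penalties concrete, a brute-force sketch (ours, over a toy normalized posterior table) computes both assignments and shows they can disagree:

```python
def map_and_mpm(posterior):
    """MAP and MPM assignments from a normalized table p(x | y), x in {0,1}^N."""
    xs = list(posterior)
    x_map = max(xs, key=lambda x: posterior[x])          # Example 1
    N = len(xs[0])
    x_mpm = tuple(
        int(sum(p for x, p in posterior.items() if x[n] == 1) > 0.5)
        for n in range(N)                                # Example 2
    )
    return x_map, x_mpm

# Toy posterior over {0,1}^2 on which MAP and MPM disagree.
post = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.0, (1, 1): 0.3}
print(map_and_mpm(post))  # ((0, 0), (0, 1))
```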
2.2
Network Communication Constraints
Let G(V, E) be any directed acyclic graph with vertex set V = {1, ..., N} and edge set

E = {(i, j) ∈ V × V | i ∈ π(j) and j ∈ χ(i)},

where index sets π(n) ⊂ V and χ(n) ⊂ V indicate, respectively, the parents and children
of each node n ∈ V. Without loss-of-generality, we assume the node labels respect the
natural partial-order implied by the graph G; specifically, we assume every node n has
parent nodes π(n) ⊂ {1, ..., n−1} and child nodes χ(n) ⊂ {n+1, ..., N}. Local to each
node n ∈ V are the respective components X_n and Y_n of the joint process (X, Y). Under
best-case assumptions on p(x, y) and G, Belief Propagation methods (e.g., max-product
in Example 1, sum-product in Example 2) require at least 2|E| real-valued messages per
observation Y = y, one per direction along each edge in G. In contrast, we insist upon
a single forward-pass through G where each node n broadcasts to its children (if any) a
single binary-valued message. This yields communication overhead of only |E| bits per
observation Y = y, but also renders the minimizing strategy of (3) infeasible.

Accepting that performance-communication tradeoffs are inherent to distributed algorithms, we proceed with the goal of minimizing the loss in performance relative to J(γ̄*).
Specifically, we now translate the stipulated restrictions on communication into explicit
constraints on the function space Γ over which to minimize (2). The simplest such translation assumes the binary-valued message produced by node n also determines the respective
component u_n in decision vector u = γ(y). Recognizing that every node n receives the
messages u_{π(n)} from its parents (if any) as side information to y_n, any function of the form
γ_n : ℝ × {0,1}^{|π(n)|} → {0,1} is a feasible processing rule; we denote the set of all such
rules by Γ_n. Then, every strategy in the set Γ^G = Γ_1 × ... × Γ_N respects the constraints.
3
Summary of Main Results
As stated in Section 1, the variational formulation presented in Section 2 can be viewed as
an extension of the optimization problem underlying decentralized Bayesian detection [5],
[6]. Even for specialized network structures (e.g., the N-node chain), it is known that exact
solution to (1) is NP-hard, stemming from the absence of a guarantee that γ* ∈ Γ^G possesses a finite parameterization. Also known is that analytical progress can be made for a
relaxation of (1), which is based on the following intuition: if strategy γ* = (γ_1*, ..., γ_N*)
is optimal over Γ^G, then for each n and assuming all components i ∈ V\n are fixed at
rules γ_i*, the component rule γ_n* must be optimal over Γ_n. Decentralized detection has
roots in team decision theory [7], a subset of game theory, in which the relaxation is named
person-by-person (pbp) optimality. While global optimality always implies pbp-optimality,
the converse is false: in general, there can be multiple pbp-optimal solutions with varying
penalty. Nonetheless, pbp-optimality (along with a specialized observation process) justifies a particular finite parameterization for the function space Γ^G, leading to a nonlinear
fixed-point equation and an iterative algorithm with favorable convergence properties. Before presenting the general algorithm, we illustrate its application in two simple examples.
Example 3: Consider the MPM assignment problem in Example 2, assuming N = 2 and
distribution p(x, y) is defined by positive-valued parameters ρ, η_1 and η_2 as follows:

p(x) ∝ 1 if x_1 = x_2, ρ if x_1 ≠ x_2;   p(y|x) = ∏_{n=1}^{N} (1/√(2π)) exp(−(y_n − η_n x_n)²/2).
Note that X_1 and X_2 are marginally uniform and ρ captures their correlation (positive,
zero, or negative when ρ is less than, equal to, or greater than unity, respectively), while Y
captures the presence of additive white Gaussian noise with signal-to-noise ratio at node n
equal to η_n. The (unconstrained) MPM strategy γ̄* simplifies to a pair of coupled threshold rules:
u_1 = 1 if L_1(y_1) > θ̄_1* = (1 + ρ L_2(y_2))/(ρ + L_2(y_2)), else u_1 = 0,  and
u_2 = 1 if L_2(y_2) > θ̄_2* = (1 + ρ L_1(y_1))/(ρ + L_1(y_1)), else u_2 = 0,
where L_n(y_n) = exp[η_n(y_n − η_n/2)] denotes the likelihood-ratio local to node n. Let
E = {(1, 2)} and define two network-constrained strategies: myopic strategy γ^0 employs
thresholds θ_1^0 = θ_2^0 = 1, meaning each node n acts to minimize Pr[U_n ≠ X_n] as if in isolation, whereas heuristic strategy γ^h employs thresholds θ_1^h = θ_1^0 and θ_2^h = ρ^{2u_1−1}, meaning
node 2 adjusts its threshold as if X_1 = u_1 (i.e., as if the myopic decision by node 1 is always
correct). Figure 1 compares these strategies and a pbp-optimal strategy γ^k: only γ^k is both
feasible and consistently "hedging" against all uncertainty, i.e., J(γ^0) ≥ J(γ^k) ≥ J(γ̄*).
[Figure 1 plots: (a) decision regions in (L_1(y_1), L_2(y_2)) likelihood-ratio space, shown for ρ < 1; (b) J(γ) versus η_1 with ρ = 0.1 and η_2 = 1; (c) J(γ) versus ρ with η_1 = η_2 = 1; curves shown for γ̄*, γ^0, γ^h and γ^k.]
Figure 1. Comparison of the four alternative strategies in Example 3: (a) sketch of the decision
regions in likelihood-ratio space, showing that network-constrained threshold rules cannot exactly
reproduce γ̄* (unless ρ = 1); (b) bit-error-rate versus η_1 with ρ and η_2 fixed, showing γ^h performs
comparably to γ^k when Y_1 is accurate relative to Y_2 but otherwise performs worse than even γ^0
(which requires no communication); (c) bit-error-rate versus ρ with η_1 and η_2 fixed, showing γ^k
uses the allotted bit of communication such that roughly 35% of the loss J(γ^0) − J(γ̄*) is recovered.
Example 4: Extend Example 3 to N > 2 nodes, but assuming X is equally-likely to be
all zeros or all ones (i.e., the extreme case of positive correlation) and Y has identically-accurate
components with η_n = 1 for all n. The MPM strategy employs thresholds
θ̄_n* = ∏_{i∈V\n} 1/L_i(y_i) for all n, leading to U = γ̄*(Y) also being all zeros or all ones;
thus, its cost distribution, or the probability mass function for c(γ̄*(Y), X), has mass only
on the values 0 and N. The myopic strategy employs thresholds θ_n^0 = 1 for all n, leading to
independent and identically-distributed (binary-valued) random variables c_n(γ_n^0(Y_n), X_n);
thus, its cost distribution, approaching a normal shape as N gets large, has mass on all values 0, 1, ..., N. Figure 2 considers a particular directed network G and, initializing to γ^0,
shows the sequence of cost distributions resulting from the iterative offline algorithm: note
the shape progression towards the cost distribution of the (infeasible) MPM strategy and
the successive reduction in bit-error-rate J(γ^k). Also noteworthy is the rapid convergence
and the successive reduction in word-error-rate Pr[c(γ^k(Y), X) ≠ 0].
[Figure 2 plots: a 12-node directed network (rules γ_1^k, ..., γ_12^k producing decisions u_1, ..., u_12) and, per offline iteration k = 0, 1, 2, 3, the cost distribution (probability mass function over the number of bit errors), with J(γ^0) = 3.7, J(γ^1) = 2.9, J(γ^2) = 2.8, J(γ^3) = 2.8.]
Figure 2. Illustration of the iterative offline computation given p(x, y) as described in Example 4
and the directed network shown (N = 12). A Monte-Carlo analysis of γ̄* yields an estimate for its
bit-error-rate of J(γ̄*) ≈ 0.49 (with standard deviation of 0.05); thus, with a total of just |E| = 11
bits of communication, the pbp-optimal strategy γ^3 recovers roughly 28% of the loss J(γ^0) − J(γ̄*).
3.1
Necessary Optimality Conditions
We start by providing an explicit probabilistic interpretation of the general problem in (1).
Lemma 1 The minimum penalty J(γ*) defined in (1) is, firstly, achievable by a deterministic¹ strategy and, secondly, equivalently defined by

J(γ*) = min_{p(u|y)} ∑_{x∈{0,1}^N} p(x) ∑_{u∈{0,1}^N} c(u, x) ∫_{y∈ℝ^N} p(u|y) p(y|x) dy

subject to p(u|y) = ∏_{n∈V} p(u_n | y_n, u_{π(n)}).
Lemma 1 is primarily of conceptual value, establishing a correspondence between fixing a component rule γ_n ∈ Γ_n and inducing a decision process U_n from the information
(Y_n, U_{π(n)}) local to node n. The following assumption permits analytical progress towards
a finite parameterization for each function space Γ_n and the basis of an offline algorithm.

Assumption 1 The observation process Y satisfies p(y|x) = ∏_{n∈V} p(y_n|x).
Lemma 2 Let Assumption 1 hold. Upon fixing a deterministic rule γ_n ∈ Γ_n local to node
n (in correspondence with p(u_n|y_n, u_{π(n)}) by virtue of Lemma 1), we have the identity

p(u_n|x, u_{π(n)}) = ∫_{y_n∈ℝ} p(u_n|y_n, u_{π(n)}) p(y_n|x) dy_n.   (4)

Moreover, upon fixing a deterministic strategy γ ∈ Γ^G, we have the identity

p(u|x) = ∏_{n∈V} p(u_n|x, u_{π(n)}).   (5)
Lemma 2 implies fixing component rule γ_n ∈ Γ_n is in correspondence with inducing the
conditional distribution p(u_n|x, u_{π(n)}), now a probabilistic description that persists local
to node n no matter the rule γ_i at any other node i ∈ V\n. Lemma 2 also introduces further
structure in the constrained optimization expressed by Lemma 1: recognizing the integral
over ℝ^N to equal p(u|x), (4) and (5) together imply it can be expressed as a product of

¹A randomized (or mixed) strategy, modeled as a probabilistic selection from a finite collection
of deterministic strategies, takes more inputs than just the observation process Y. That deterministic
strategies suffice, however, justifies "post-hoc" our initial abuse of notation for elements in the set Γ.

component integrals, each over ℝ. We now argue that, despite these simplifications, the
component rules of γ* continue to be globally coupled.

Starting with any deterministic strategy γ ∈ Γ^G, consider optimizing the nth component
rule γ_n over Γ_n assuming all other components stay fixed. With γ_n a degree-of-freedom,
decision process U_n is no longer well-defined so each u_n ∈ {0,1} merely represents a
candidate decision local to node n. Online, each local decision will be made only upon receiving both the local observation Y_n = y_n and all parents' local decisions U_{π(n)} = u_{π(n)}.
It follows that node n, upon deciding a particular u_n, may assert that random vector U is restricted to values in the subset U[u_{π(n)}, u_n] = {u' ∈ {0,1}^N | u'_{π(n)} = u_{π(n)}, u'_n = u_n}.
Then, viewing (Y_n, U_{π(n)}) as a composite local observation and proceeding in the manner
by which (3) is derived, the pbp-optimal relaxation of (1) reduces to the following form.
Proposition 1 Let Assumption 1 hold. In an optimal network-constrained strategy
γ* ∈ Γ^G, for each n and assuming all components i ∈ V\n are fixed at rules γ_i* (each
in correspondence with p*(u_i|x, u_{π(i)}) by virtue of Lemma 2), the rule γ_n* satisfies

γ_n*(Y_n, U_{π(n)}) = arg min_{u_n∈{0,1}} ∑_{x∈{0,1}^N} b_n*(u_n, x; U_{π(n)}) p(Y_n|x) with probability one   (6)

where, for each u_{π(n)} ∈ {0,1}^{|π(n)|},

b_n*(u_n, x; u_{π(n)}) = p(x) ∑_{u∈U[u_{π(n)},u_n]} c(u, x) ∏_{i∈V\n} p*(u_i|x, u_{π(i)}).   (7)
Of note are (i) the likelihood function p(Y_n|x) is a finite-dimensional sufficient statistic of
Y_n, (ii) real-valued coefficients b_n provide a finite parameterization of the function space
Γ_n and (iii) the pbp-optimal coefficient values b_n*, while still computable offline, also depend on the distributions p*(u_i|x, u_{π(i)}) in correspondence with all fixed rules γ_i*.
3.2
Offline Message-Passing Algorithm
Let f_n map from coefficients {b_i; i ∈ V\n} to coefficients b_n by the following operations:
1. for each i ∈ V\n, compute p(u_i|x, u_{π(i)}) via (4) and (6) given b_i and p(y_i|x);
2. compute b_n via (7) given p(x), c(u, x) and {p(u_i|x, u_{π(i)}); i ∈ V\n}.
Then, the simultaneous satisfaction of Proposition 1 at all N nodes can be viewed as a
system of 2^{N+1} ∑_{n∈V} 2^{|π(n)|} nonlinear equations in as many unknowns,

b_n = f_n(b_1, ..., b_{n−1}, b_{n+1}, ..., b_N),   n = 1, ..., N,   (8)

or, more concisely, b = f(b). The connection between each f_n and Proposition 1 affords
an equivalence between solving the fixed-point equation b = f(b) via a Gauss-Seidel iteration and
minimizing J(γ) via a coordinate-descent iteration [9], implying an algorithm guaranteed
to terminate and achieve penalty no greater than that of an arbitrary initial strategy γ^0 ↔ b^0.
Proposition 2 Initialize to any coefficients b^0 = (b_1^0, ..., b_N^0) and generate the sequence
{b^k} using a component-wise iterative application of f in (8), i.e., for k = 1, 2, ...,

b_n^k := f_n(b_1^{k−1}, ..., b_{n−1}^{k−1}, b_{n+1}^k, ..., b_N^k),   n = N, N−1, ..., 1.   (9)

If Assumption 1 holds, the associated sequence {J(γ^k)} is non-increasing and converges:

J(γ^0) ≥ J(γ^1) ≥ ... ≥ J(γ^k) → J^∞ ≥ J(γ*) ≥ J(γ̄*).
Direct implementation of (9) is clearly imprudent from a computational perspective, because the transformation from fixed coefficients b_n^k to the corresponding distribution
p^k(u_n|x, u_{π(n)}) need not be repeated within every component evaluation of f. In fact,
assuming every node n stores in memory its own likelihood function p(y_n|x), this transformation can be accomplished locally (cf. (4) and (6)) and, also assuming the resulting
distribution is broadcast to all other nodes before they proceed with their subsequent component evaluation of f, the termination guarantee of Proposition 2 is retained. Requiring
every node to perform a network-wide broadcast within every iteration k makes (9) a decidedly global algorithm, not to mention that each node n must also store in memory p(x, y_n)
and c(u, x) to carry forth the supporting local computations.

Assumption 2 The cost function satisfies c(u, x) = ∑_{n∈V} c_n(u_n, x) for some collection
of functions {c_n : {0,1}^{N+1} → ℝ} and the directed graph G is tree-structured.
Proposition 3 Under Assumption 2, the following two-pass procedure is identical to (9):

• Forward-pass at node n: upon receiving messages from all parents i ∈ π(n), store them
for use in the next reverse-pass and send to each child j ∈ χ(n) the following messages:

P_{n→j}^k(u_n|x) := ∑_{u_{π(n)}∈{0,1}^{|π(n)|}} p^{k−1}(u_n|x, u_{π(n)}) ∏_{i∈π(n)} P_{i→n}^k(u_i|x).   (10)

• Reverse-pass at node n: upon receiving messages from all children j ∈ χ(n), update

b_n^k(u_n, x; u_{π(n)}) := p(x) ∏_{i∈π(n)} P_{i→n}^k(u_i|x) [ c_n(u_n, x) + ∑_{j∈χ(n)} C_{j→n}^k(u_n, x) ]   (11)

and the corresponding distribution p^k(u_n|x, u_{π(n)}) via (4) and (6), store the distribution
for use in the next forward pass and send to each parent i ∈ π(n) the following messages:

C_{n→i}^k(u_i, x) := ∑_{u_n∈{0,1}} p(u_n|x, u_i) [ c_n(u_n, x) + ∑_{j∈χ(n)} C_{j→n}^k(u_n, x) ],   (12)

where

p(u_n|x, u_i) = ∑_{u_{π(n)}∈{u'∈{0,1}^{|π(n)|} | u'_i = u_i}} p^k(u_n|x, u_{π(n)}) ∏_{ℓ∈π(n)\i} P_{ℓ→n}^k(u_ℓ|x).
An intuitive interpretation of Proposition 3, from the perspective of node n, is as follows.
From (10) in the forward pass, the messages received from each parent define what, during subsequent online operation, that parent's local decision means (in a likelihood sense)
about its ancestors' outputs and the hidden process. From (12) in the reverse pass, the messages received from each child define what the local decision will mean (in an expected
cost sense) to that child and its descendants. From (11), both types of incoming messages
impact the local rule update and, in turn, the outgoing messages to both types of neighbors.
While Proposition 3 alleviates the need for the iterative global broadcast of distributions
p^k(u_n|x, u_{π(n)}), the explicit dependence of (10)-(12) on the full vector x implies the memory and computation requirements local to each node can still be exponential in N.
Q
Assumption 3 The hidden process X is Markov on G, or p(x) = n?V p(xn |x?(n) ), and
all component likelihoods/costs satisfy p(yn |x) = p(yn |xn ) and cn (un , x) = cn (un , xn ).
Proposition 4 Under Assumption 3, the iterates in Proposition 3 specialize to the form of

b_n^k(u_n, x_n; u_{π(n)}),   P_{n→j}^k(u_n|x_n)   and   C_{n→i}^k(u_i, x_i),   k = 0, 1, ...

and each node n need only store in memory p(x_{π(n)}, x_n, y_n) and c_n(u_n, x_n) to carry forth
the supporting local computations. (The actual equations can be found in [8].)
Proposition 4 implies the convergence properties of Proposition 2 are upheld with maximal
efficiency (linear in N) when G is tree-structured and the global distribution and costs satisfy p(x, y) = ∏_{n∈V} p(x_n|x_{π(n)}) p(y_n|x_n) and c(u, x) = ∑_{n∈V} c_n(u_n, x_n), respectively.
Note that these conditions hold for the MPM assignment problems in Examples 3 & 4.
4
Discussion
Our decision-theoretic variational approach reflects several departures from existing methods for communication-constrained inference. Firstly, instead of imposing the constraints
on an algorithm derived from an ideal model, we explicitly model the constraints and derive a different algorithm. Secondly, our penalty function drives the approximation by the
desired application of inference (e.g., posterior assignment) as opposed to a generic error
measure on the result of inference (e.g., divergence in true and approximate marginals).
Thirdly, the necessary offline computation gives rise to a downside, namely less flexibility
against time-varying statistical environments, decision objectives or network conditions.
Our development also evokes principles in common with other research areas. Similar to
the sum-product version of Belief Propagation (BP), our message-passing algorithm originates assuming a tree structure, an additive cost and a synchronous message schedule. It is
thus enticing to claim that the maturation of BP (e.g., max-product, asynchronous schedule, cyclic graphs) also applies, but unique aspects to our development (e.g., directed graph,
weak convergence, asymmetric messages) merit caution. That we solve for correlated equilibria and depend on probabilistic structure commensurate with cost structure for efficiency
is in common with graphical games [10], which distinctly are formulated on undirected
graphs and absent of hidden variables. Finally, our offline computation resembles learning
a conditional random field [11], in the sense that factors of p(u|x) are iteratively modified
to reduce penalty J(γ); online computation via strategy u = γ(y), repeated per realization
Y = y, is then viewed as sampling from this distribution. Along the learning thread, a
special case of our formulation appears in [12], but assuming p(x, y) is unknown.
Acknowledgments
This work was supported by the Air Force Office of Scientific Research under contract FA9550-04-1 and by the Army Research Office under contract DAAD19-00-1-0466. We are grateful
to Professor John Tsitsiklis for taking time to discuss the correctness of Proposition 1.
References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[2] L. Chen, et al. Data association based on optimization in graphical models with application to
sensor networks. Mathematical and Computer Modeling, 2005. To appear.
[3] A. T. Ihler, et al. Message errors in belief propagation. Advances in NIPS 17, MIT Press, 2005.
[4] M. I. Jordan, et al. An introduction to variational methods for graphical models. Learning in
Graphical Models, pp. 105–161, MIT Press, 1999.
[5] J. N. Tsitsiklis. Decentralized detection. Adv. in Stat. Sig. Proc., pp. 297–344, JAI Press, 1993.
[6] P. K. Varshney. Distributed Detection and Data Fusion. Springer-Verlag, 1997.
[7] J. Marschak and R. Radner. The Economic Theory of Teams. Yale University Press, 1972.
[8] O. P. Kreidl and A. S. Willsky. Posterior assignment in directed graphical models with minimal
online communication. Available: http://web.mit.edu/opk/www/res.html
[9] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1995.
[10] S. Kakade, et al. Correlated equilibria in graphical games. ACM-CEC, pp. 42–47, 2003.
[11] J. Lafferty, et al. Conditional random fields: Probabilistic models for segmenting and labeling
sequence data. ICML, 2001.
[12] X. Nguyen, et al. Decentralized detection and classification using kernel methods. ICML, 2004.
Analysis in Micropower VLSI
Abdullah Celik, Milutin Stanacevic and Gert Cauwenberghs
Johns Hopkins University, Baltimore, MD 21218
{acelik,miki,gert}@jhu.edu
Abstract
We present micropower mixed-signal VLSI hardware for real-time blind
separation and localization of acoustic sources. Gradient flow representation of the traveling wave signals acquired over a miniature (1 cm diameter) array of four microphones yields linearly mixed instantaneous
observations of the time-differentiated sources, separated and localized
by independent component analysis (ICA). The gradient flow and ICA
processors each measure 3 mm × 3 mm in 0.5 µm CMOS, and consume
54 µW and 180 µW power, respectively, from a 3 V supply at 16 ks/s
sampling rate. Experiments demonstrate perceptually clear (12 dB) separation and precise localization of two speech sources presented through
speakers positioned at 1.5 m from the array on a conference room table.
Analysis of the multipath residuals shows that they are spectrally diffuse,
and void of the direct path.
1 Introduction
Time lags in acoustic wave propagation provide cues to localize an acoustic source from
observations across an array. The time lags also complicate the task of separating multiple
co-existing sources using independent component analysis (ICA), which conventionally
assumes instantaneous mixture observations.
Inspiration from biology suggests that for very small aperture (spacing between acoustic
sensors, i.e., tympanal membranes), small differences (gradients) in sound pressure level
are more effective in resolving source direction than actual (microsecond scale) time differences. The remarkable auditory localization capability of certain insects at a small (1%)
fraction of the wavelength of the source owes to highly sensitive differential processing
of sound pressure through inter-tympanal mechanical coupling [1] or inter-aural coupled
neural circuits [2].
We present a mixed-signal VLSI system that operates on spatial and temporal differences
(gradients) of the acoustic field at very small aperture to separate and localize mixtures of
traveling wave sources. The real-time performance of the system is characterized through
experiments with speech sources presented through speakers in a conference room setting.
Figure 1: (a) Gradient flow principle. At low aperture, interaural level differences (ILD) and interaural time differences (ITD) are directly related, scaled by the temporal derivative of the signal. (b) 3-D localization (azimuth θ and elevation φ) of an acoustic source using a planar geometry of four microphones.
2 Gradient Flow Independent Component Analysis
Gradient flow [3, 4] is a signal conditioning technique for source separation and localization
suited for arrays of very small aperture, i.e., of dimensions significantly smaller than the
shortest wavelength in the sources. The principle is illustrated in Figure 1 (a). Consider a
traveling acoustic wave impinging on an array of four microphones, in the configuration of
Figure 1 (b). The 3-D direction cosines of the traveling wave u are implied by propagation
delays $\tau_1$ and $\tau_2$ in the source along directions $p$ and $q$ in the sensor plane. Direct measurement of these delays is problematic as they require sampling in excess of the bandwidth of the signal, increasing noise floor and power requirements. However, indirect estimates of the delays are obtained, to first order, by relating spatial and temporal derivatives of the acoustic field:
$$\xi_{10}(t) \approx \tau_1\,\dot{\xi}_{00}(t), \qquad \xi_{01}(t) \approx \tau_2\,\dot{\xi}_{00}(t) \tag{1}$$
where $\xi_{10}$ and $\xi_{01}$ represent spatial gradients in $p$ and $q$ directions around the origin ($p = q = 0$), $\xi_{00}$ the spatial common mode, and $\dot{\xi}_{00}$ its time derivative. Estimates of $\xi_{00}$, $\xi_{10}$ and $\xi_{01}$ for the sensor geometry of Figure 1 can be obtained as:
$$\xi_{00} \approx \tfrac{1}{4}\left(x_{-1,0} + x_{1,0} + x_{0,-1} + x_{0,1}\right), \quad \xi_{10} \approx \tfrac{1}{2}\left(x_{1,0} - x_{-1,0}\right), \quad \xi_{01} \approx \tfrac{1}{2}\left(x_{0,1} - x_{0,-1}\right) \tag{2}$$
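As a concrete reading of (2), the following minimal sketch (our illustration, not the authors' code) computes the discrete-time gradient estimates from four sampled microphone channels in the cross geometry of Figure 1 (b):

```python
import numpy as np

def gradient_flow(x_m10, x_p10, x_0m1, x_0p1):
    """Common mode and first-order spatial gradients per Eq. (2).

    Arguments are equal-length sample arrays from microphones at
    grid positions (-1,0), (1,0), (0,-1), (0,1).
    """
    xi00 = 0.25 * (x_m10 + x_p10 + x_0m1 + x_0p1)   # spatial common mode
    xi10 = 0.5 * (x_p10 - x_m10)                    # gradient along p
    xi01 = 0.5 * (x_0p1 - x_0m1)                    # gradient along q
    return xi00, xi10, xi01
```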
A single source can be localized by estimating direction cosines $\tau_1$ and $\tau_2$ from (1), a principle known for years in monopulse radar, exploited by parasite insects [1], and implemented in mixed-signal VLSI hardware [6]. As shown in Figure 1 (b), the planar geometry of four microphones allows one to localize a source in 3-D, with both azimuth and elevation.¹

¹ An alternative using two microphones, exploiting the shape of the pinna, is presented in [5].

More significantly, multiple coexisting sources $s^\ell(t)$ can be jointly separated and localized using essentially the same principle [3, 4]:
$$\xi_{00}(t) = \sum_\ell s^\ell(t) + \nu_{00}(t), \qquad \xi_{10}(t) = \sum_\ell \tau_1^\ell\, \dot{s}^\ell(t) + \nu_{10}(t), \qquad \xi_{01}(t) = \sum_\ell \tau_2^\ell\, \dot{s}^\ell(t) + \nu_{01}(t) \tag{3}$$
where $\nu_{00}$, $\nu_{10}$ and $\nu_{01}$ represent common mode and spatial derivative components of additive noise in the sensor observations. Taking the time derivative of $\xi_{00}$, we thus obtain
from the sensors a linear instantaneous mixture of the time-differentiated source signals,
$$\begin{bmatrix} \dot{\xi}_{00} \\ \xi_{10} \\ \xi_{01} \end{bmatrix} \approx \begin{bmatrix} 1 & \cdots & 1 \\ \tau_1^1 & \cdots & \tau_1^L \\ \tau_2^1 & \cdots & \tau_2^L \end{bmatrix} \begin{bmatrix} \dot{s}^1 \\ \vdots \\ \dot{s}^L \end{bmatrix} + \begin{bmatrix} \dot{\nu}_{00} \\ \nu_{10} \\ \nu_{01} \end{bmatrix}, \tag{4}$$
an equation in the standard form x = As + n, where x is given and the mixing matrix A
and sources s are unknown. Ignoring the noise term n, the problem setting is standard in
Independent Component Analysis (ICA), and three independent sources can be identified
from the three gradient observations.
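The instantaneous form of this mixture can be checked numerically. In the sketch below, the source frequencies and sub-sample delays are illustrative assumptions chosen only to demonstrate the approximation; for delays small relative to the signal bandwidth, $\xi_{10}$ tracks the $\tau$-weighted sum of source derivatives predicted by (4):

```python
import numpy as np

fs = 16000
t = np.arange(8000) / fs
tau1 = [60e-6, -25e-6]          # assumed tau_1^l per source (seconds)
tau2 = [-40e-6, 35e-6]          # assumed tau_2^l per source
src = [np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 670 * t)]

def mic(p, q):
    # x_pq(t) = sum_l s^l(t + p*tau_1^l + q*tau_2^l); fractional delays
    # realized by linear interpolation
    return sum(np.interp(t + p * tau1[l] + q * tau2[l], t, s)
               for l, s in enumerate(src))

xi10 = 0.5 * (mic(1, 0) - mic(-1, 0))
pred = sum(tau1[l] * np.gradient(s, 1 / fs) for l, s in enumerate(src))
print(np.corrcoef(xi10, pred)[0, 1])   # close to 1 at small aperture
```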
Various formulations of ICA exist to arrive at estimates of the unknown s and A from
observations x. ICA algorithms typically specify some sort of statistical independence assumption on the sources s either in distribution over amplitude [7] or over time [8]. Most
forms specify ICA to be static, in assuming that the observations contain static (instantaneous) linear mixtures of the sources. Note that this definition of static ICA includes
methods for blind source separation that make use of temporal structure in the dynamics
within the sources themselves [8], as long as the observed mixture of the sources is static.
In contrast, ?convolutive? ICA techniques explicitly assume convolutive or delayed mixtures in the source observations. Convolutive ICA techniques (e.g., [10]) are usually much
more involved and require a large number of parameters and long adaptation time horizons
for proper convergence.
The instantaneous static formulation of gradient flow (4) is convenient,² and avoids the need for non-static (convolutive) ICA to separate delayed mixtures of traveling wave sources (in free space) $x_{pq}(t) = \sum_\ell s^\ell(t + p\,\tau_1^\ell + q\,\tau_2^\ell)$. Reverberation in multipath wave propagation
contributes delayed mixture components in the observations which limit the effectiveness
of a static ICA formulation. As shown in the experiments below, static ICA still produces
reasonable results (12 dB of perceptually clear separation) in typical enclosed acoustic
environments (conference room).
3 Micropower VLSI Implementation
Various analog VLSI implementations of ICA exist in the literature, e.g., [11, 12], and
digital implementations using DSP are common practice in the field. By adopting a mixedsignal architecture in the implementation, we combine advantages of both approaches: an
analog datapath directly interfaces with inputs and outputs without the need for data conversion; and digital adaptation offers the flexibility of reconfigurable ICA learning rules.
² The time-derivative in the source signals (4) is immaterial, and can be removed by time-integrating the separated signals obtained by applying ICA directly to the gradient flow signals.
Figure 2: (a) Gradient flow processor. (b) Reconfigurable ICA processor. Dimensions of both processors are 3 mm × 3 mm in 0.5 µm CMOS technology.
Figure 3: Reconfigurable mixed-signal ICA architecture implementing general outer-product forms of ICA update rules.
3.1 Gradient Flow Processor
The mixed-signal VLSI processor implementing gradient flow is presented in [6]. A micrograph of the chip is shown in Figure 2 (a). Precise analog gradients $\dot{\xi}_{00}$, $\xi_{10}$ and $\xi_{01}$ are acquired from the microphone signals by correlated double sampling (CDS) in fully differential switched-capacitor circuits. Least-mean-squares (LMS) cancellation of common-mode leakage in the gradient signals further increases differential sensitivity. The adaptation is performed in the digital domain using counting registers, and couples to the switched-capacitor circuits using capacitive multiplying DAC arrays. An additional stage of LMS adaptation produces digital estimates of direction cosines $\tau_1$ and $\tau_2$ for a single source. In the present setup this stage is bypassed, and the common-mode corrected gradient signals are presented as inputs to the ICA chip for localization and separation of up to three independent sources.
3.2 Reconfigurable ICA Processor
A general mixed-signal parallel architecture, that can be configured for implementation of
various ICA update rules in conjunction with gradient flow, is shown in Figure 3 [9]. Here
we briefly illustrate the architecture with a simple configuration designed to separate two
sources, and present CMOS circuits that implement the architecture. The micrograph of the reconfigurable ICA chip is shown in Figure 2 (b).
3.2.1 ICA update rule
Efficient implementation in a parallel architecture requires a simple form of the update rule that avoids excessive matrix multiplications and inversions. A variety of ICA update algorithms can be cast in a common, unifying framework of outer-product rules [9].
To obtain estimates $y = \hat{s}$ of the sources $s$, a linear transformation with matrix $W$ is applied to the gradient signals $x$: $y = Wx$. Diagonal terms are fixed, $w_{ii} \equiv 1$, and off-diagonal terms adapt according to
$$\Delta w_{ij} = -\eta\, f(y_i)\, g(y_j), \qquad i \neq j \tag{5}$$
The implemented update rule can be seen as the gradient of InfoMax [7] multiplied by $W^T$, rather than the natural gradient multiplication factor $W^T W$. To obtain the full natural gradient in outer-product form, it is necessary to include a back-propagation path in the network architecture, and thus additional silicon resources, to implement the vector contribution $y^T$. Other equivalences with standard ICA algorithms are outlined in [9].
3.2.2 Architecture
Level comparison provides implementation of discrete approximations of any scalar function $f(y)$ and $g(y)$ appearing in different learning rules. Since speech signals are approximately Laplacian distributed, the nonlinear scalar function $f(y)$ is approximated by $\mathrm{sign}(y)$ and implemented using single-bit quantization. Conversely, a linear function $g(y) \propto y$ in the learning rule is approximated by a 3-level staircase function ($-1$, $0$, $+1$) using 2-bit quantization. The quantization of the $f$ and $g$ terms in the update rule (5) simplifies the implementation to that of discrete counting operations.
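A behavioral sketch of one update step follows; it is a software idealization of the counting scheme, not the chip's exact fixed-point circuit, and the threshold and step size below are placeholder values:

```python
import numpy as np

def staircase(y, thresh):
    # 2-bit, 3-level approximation of g(y): values in {-1, 0, +1}
    return np.where(y > thresh, 1, np.where(y < -thresh, -1, 0))

def ica_step(W, x, thresh=0.1, eta=1.0):
    y = W @ x                                  # unmix: y = W x
    dW = -eta * np.outer(np.sign(y), staircase(y, thresh))
    np.fill_diagonal(dW, 0.0)                  # diagonal stays fixed, w_ii = 1
    return W + dW, y
```

In the hardware the off-diagonal entries live in 14-bit counter registers, so eta corresponds to a counter increment or decrement per sample; in software, W would be initialized to the identity and the update applied at every sampling instant.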
The functional block diagram of a 3 × 3 outer-product incremental ICA architecture, supporting a quantized form of the general update rule (5), is shown in Figure 3 [9]. Un-mixing
coefficients are stored digitally in each cell of the architecture. The update is performed
locally by once or repeatedly incrementing, decrementing or holding the current value of
counter based on the learning rule served by the micro-controller. The 8 most significant
bits of the 14-bit counter holding and updating the coefficients are presented to a multiplying D/A capacitor array [6] to linearly unmix the separated signal. The remaining 6
bits in the coefficient registers provide flexibility in programming the update rate to tailor
convergence.
3.2.3 Circuit implementation
As in the implementation of the gradient flow processor [6], the mixed-signal ICA architecture is implemented using fully differential switched-capacitor sampled-data circuits.
Correlated double sampling performs common mode offset rejection and 1/f noise reduction. An external micro-controller provides flexibility in the implementation of different
learning rules. The ICA architecture is integrated on a single 3mm ? 3mm chip fabricated
in 0.5 ?m 3M2P CMOS technology.
The block diagram of ICA prototype in Figure 3 indicates its main functionality is a
vector(3x1)-matrix(3x3) multiplication with adaptive matrix elements.
Each cell in the implemented architecture contains a 14-bit counter, decoder and D/A capacitor arrays. Adaptation is performed in outer-product fashion by incrementing, decrementing or holding the current value of the counters. The most significant 8 bits of the
Figure 4: Correlated double sampling (CDS) switched-capacitor fully differential circuits
implementing linearly weighted summing in the mixed-signal ICA architecture.
Figure 5: Experimental setup for separation of two acoustic sources in a conference room environment.
counter are presented to the multiplying D/A capacitor arrays to construct the source estimation. Figure 4 shows the circuits one output component in the architecture, linearly
summing the input contributions. The implementation of the multiplying capacitor arrays
are identical to those discussed in [6]. Each output signal $y_i$ is computed by accumulating outputs from all the cells in the $i$th row. The accumulation is performed on $C_2$ by a switched-capacitor amplifier, yielding the estimated signals during the $\phi_2$ phase. While the estimation signals are valid, $y_i^+$ is sampled at $\hat{\phi}_1$ by the comparator circuit. The sign of the comparison of $y_i$ with a variable level threshold $V_{th}$ is computed in the evaluate phase, through capacitive coupling into the amplifier input node.
4 Experimental Results
To demonstrate source separation and localization in a real environment, the mixed-signal
VLSI ASICs were interfaced with four omnidirectional miniature microphones (Knowles
FG-3629), arranged in a circular array with radius 0.5 cm. At the front-end, the microphone
signals were passed through second-order bandpass filters with low-frequency cutoff at
130 Hz and high-frequency cutoff at 4.3 kHz. The signals were also amplified by a factor
of 20.
The experimental setup is shown in Figure 5. The speech signals were presented through
loudspeakers positioned at 1.5 m distance from the array. The system sampling frequency
of both chips was set to 16 kHz. A male and female speakers from TIMIT database were
chosen as sound sources. To provide the ground truth data and full characterization of the
systems, speech segments were presented individually through either loudspeaker at different time instances. The data was recorded for both speakers, archived, and presented to the
Figure 6: Time waveforms and spectrograms of the presented sources $s_1$ and $s_2$, observed common-mode and gradient signals $\xi_{00}$, $\xi_{10}$ and $\xi_{01}$ by the gradient flow chip, and recovered sources $\hat{s}_1$ and $\hat{s}_2$ by the ICA chip.
Table 1: Localization performance

                                     Male speaker   Female speaker
  Single-source LMS localization          -31.11            40.95
  Dual-source ICA localization            -30.35            43.55
gradient flow chip. Localization results obtained by gradient flow chip through LMS adaptation are reported in Table 1. The two recorded datasets were then added, and presented to
the gradient flow ASIC. The gradient signals obtained from the chip were then presented
to the ICA processor, configured to implement the outerproduct update algorithm in (5).
The observed convergence time was around 2 seconds. From the recorded 14-bit digital
weights, the angles of incidence of the sources relative to the array were derived. These
estimated angles are reported in Table 1. As seen, the angles obtained through LMS bearing estimation under individual source presentation are very close to the angles produced
by ICA under joint presentation of both sources. The original sources and the recorded
source signal estimates, along with recorded common-mode signal and first-order spatial
gradients, are shown in Figure 6.
5 Conclusions
We presented a mixed-signal VLSI system that operates on spatial and temporal differences
(gradients) of the acoustic field at very small aperture to separate and localize mixtures of
traveling wave sources. The real-time performance of the system was characterized through
experiments with speech sources presented through speakers in a conference room setting.
Although application of static ICA is limited by reverberation, the perceptual quality of the
separated outputs owes to the elimination of the direct path in the residuals. Miniature size
of the microphone array enclosure (1 cm diameter) and micropower consumption of the
VLSI hardware (250 µW) are key advantages of the approach, with applications to hearing
aids, conferencing, multimedia, and surveillance.
Acknowledgments
This work was supported by grants of the Catalyst Foundation (New York), the National
Science Foundation, and the Defense Intelligence Agency.
References
[1] D. Robert, R.N. Miles, and R.R. Hoy, "Tympanal Hearing in the Sarcophagid Parasitoid Fly Emblemasoma sp.: the Biomechanics of Directional Hearing," J. Experimental Biology, vol. 202, pp. 1865-1876, 1999.
[2] R. Reeve and B. Webb, "New neural circuits for robot phonotaxis," Philosophical Transactions of the Royal Society A, vol. 361, pp. 2245-2266, 2002.
[3] G. Cauwenberghs, M. Stanacevic, and G. Zweig, "Blind Broadband Source Localization and Separation in Miniature Sensor Arrays," Proc. IEEE Int. Symp. Circuits and Systems (ISCAS'2001), Sydney, Australia, May 6-9, 2001.
[4] J. Barrère and G. Chabriel, "A Compact Sensor Array for Blind Separation of Sources," IEEE Transactions on Circuits and Systems, Part I, vol. 49 (5), pp. 565-574, 2002.
[5] J.G. Harris, C.-J. Pu, and J.C. Principe, "A Neuromorphic Monaural Sound Localizer," Proc. Neural Inf. Proc. Sys. (NIPS*1998), Cambridge MA: MIT Press, vol. 10, pp. 692-698, 1999.
[6] G. Cauwenberghs and M. Stanacevic, "Micropower Mixed-Signal Acoustic Localizer," Proc. IEEE Eur. Solid State Circuits Conf. (ESSCIRC'2003), Estoril, Portugal, Sept. 16-18, 2003.
[7] A.J. Bell and T.J. Sejnowski, "An Information Maximization Approach to Blind Separation and Blind Deconvolution," Neural Comp., vol. 7 (6), pp. 1129-1159, Nov. 1995.
[8] L. Molgedey and G. Schuster, "Separation of a mixture of independent signals using time delayed correlations," Physical Review Letters, vol. 72, no. 23, pp. 3634-3637, 1994.
[9] A. Celik, M. Stanacevic, and G. Cauwenberghs, "Mixed-Signal Real-Time Adaptive Blind Source Separation," Proc. IEEE Int. Symp. Circuits and Systems (ISCAS'2004), Vancouver, Canada, May 23-26, 2004.
[10] R. Lambert and A. Bell, "Blind separation of multiple speakers in a multipath environment," Proc. ICASSP'97, Munich, 1997.
[11] M.H. Cohen and A.G. Andreou, "Analog CMOS Integration and Experimentation with an Autoadaptive Independent Component Analyzer," IEEE Trans. Circuits and Systems II, vol. 42 (2), pp. 65-77, Feb. 1995.
[12] A.B.A. Gharbi and F.M.A. Salam, "Implementation and Test Results of a Chip for the Separation of Mixed Signals," Proc. Int. Symp. Circuits and Systems (ISCAS'95), May 1995.
[13] M. Cohen and G. Cauwenberghs, "Blind Separation of Linear Convolutive Mixtures through Parallel Stochastic Optimization," Proc. IEEE Int. Symp. Circuits and Systems (ISCAS'98), Monterey CA, vol. 3, pp. 17-20, 1998.
1,996 | 2,813 | On the Accuracy of Bounded Rationality:
How Far from Optimal Is Fast and Frugal?
Michael Schmitt
Ludwig-Marum-Gymnasium
Schlossgartenstraße 11
76327 Pfinztal, Germany
[email protected]
Laura Martignon
Institut für Mathematik und Informatik
Pädagogische Hochschule Ludwigsburg
Reuteallee 46, 71634 Ludwigsburg, Germany
[email protected]
Abstract
Fast and frugal heuristics are well studied models of bounded rationality. Psychological research has proposed the take-the-best heuristic as a
successful strategy in decision making with limited resources. Take-the-best searches for a sufficiently good ordering of cues (features) in a task
where objects are to be compared lexicographically. We investigate the
complexity of the problem of approximating optimal cue permutations
for lexicographic strategies. We show that no efficient algorithm can approximate the optimum to within any constant factor, if P 6= NP. We
further consider a greedy approach for building lexicographic strategies
and derive tight bounds for the performance ratio of a new and simple
algorithm. This algorithm is proven to perform better than take-the-best.
1
Introduction
In many circumstances the human mind has to make decisions when time and knowledge
are limited. Cognitive psychology categorizes human judgments made under such constraints as being boundedly rational if they are ?satisficing? (Simon, 1982) or, more generally, if they do not fall too far behind the rational standards. A class of models for human
reasoning studied in the context of bounded rationality consists of simple algorithms termed
?fast and frugal heuristics?. These were the topic of major psychological research (Gigerenzer and Goldstein, 1996; Gigerenzer et al., 1999). Great efforts have been put into testing
these heuristics by empirical means in experiments with human subjects (Br?oder, 2000;
Br?oder and Schiffer, 2003; Lee and Cummins, 2004; Newell and Shanks, 2003; Newell
et al., 2003; Slegers et al., 2000) or in simulations on computers (Br?oder, 2002; Hogarth
and Karelaia, 2003; Nellen, 2003; Todd and Dieckmann, 2005). (See also the discussion
and controversies documented in the open peer commentaries on Todd and Gigerenzer,
2000.)
Among the fast and frugal heuristics there is an algorithm called ?take-the-best? (TTB)
that is considered a process model for human judgments based on one-reason decision
making. Which of the two cities has a larger population: (a) Düsseldorf (b) Hamburg?
This is the task originally studied by Gigerenzer and Goldstein (1996) where German cities
with a population of more than 100,000 inhabitants had to be compared. The available
information on each city consists of the values of nine binary cues, or attributes, indicating
                Soccer Team   State Capital   License Plate
  Hamburg            1              1               0
  Essen              0              0               1
  Düsseldorf         0              1               1
  Validity           1             1/2              0

Table 1: Part of the German cities task of Gigerenzer and Goldstein (1996). Shown are profiles and validities of three cues for three cities. Cue validities are computed from the data as given here. The original data has different validities but the same cue ranking.
presence or absence of a feature. The cues being used are, for instance, whether the city is
a state capital, whether it is indicated on car license plates by a single letter, or whether it
has a soccer team in the national league. The judgment which city is larger is made on the
basis of the two binary vectors, or cue profiles, representing the two cities. TTB performs
a lexicographic strategy, comparing the cues one after the other and using the first cue that
discriminates as the one reason to yield the final decision. For instance, if one city has
a university and the other does not, TTB would infer that the first city is larger than the
second. If the cue values of both cities are equal, the algorithm passes on to the next cue.
TTB examines the cues in a certain order. Gigerenzer and Goldstein (1996) introduced
ecological validity as a numerical measure for ranking the cues. The validity of a cue is
a real number in the interval [0, 1] that is computed in terms of the known outcomes of
paired comparisons. It is defined as the number of pairs the cue discriminates correctly
(i.e., where it makes a correct inference) divided by the number of pairs it discriminates
(i.e., where it makes an inference, be it right or wrong). TTB always chooses a cue with
the highest validity, that is, it ?takes the best? among those cues not yet considered. Table 1
shows cue profiles and validities for three cities. The ordering defined by the size of their population is given by
$$\{\langle \text{Düsseldorf}, \text{Essen}\rangle, \langle \text{Düsseldorf}, \text{Hamburg}\rangle, \langle \text{Essen}, \text{Hamburg}\rangle\},$$
where a pair $\langle a, b\rangle$ indicates that $a$ has fewer inhabitants than $b$. As an example for calculating the validity, the state-capital cue distinguishes the first and the third pair but is correct only on the latter. Hence, its validity has value 1/2.

The order in which the cues are ranked is crucial for success or failure of TTB. In the example of Düsseldorf and Hamburg, the car-license-plate cue would yield that Düsseldorf (D) is larger than Hamburg (HH), whereas the soccer-team cue would correctly favor Hamburg.
Thus, how successful a lexicographic strategy is in a comparison task consisting of a partial ordering of cue profiles depends on how well the cue ranking minimizes the number of
incorrect comparisons. Specifically, the accuracy of TTB relies on the degree of optimality
achieved by the ranking according to decreasing cue validities. For TTB and the German
cities task, computer simulations have shown that TTB discriminates at least as accurately as
other models (Gigerenzer and Goldstein, 1996; Gigerenzer et al., 1999; Todd and Dieckmann, 2005). TTB made as many correct inferences as standard algorithms proposed by
cognitive psychology and even outperformed some of them.
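For concreteness, the following sketch computes ecological validities and the TTB ranking on the Table 1 data (cue order: soccer team, state capital, license plate; a pair (a, b) encodes that city a has the smaller population):

```python
def validity(i, pairs):
    disc = [(a, b) for a, b in pairs if a[i] != b[i]]
    correct = sum(1 for a, b in disc if a[i] < b[i])
    return correct / len(disc) if disc else 0.0

def ttb_order(n, pairs):
    # rank cues by decreasing validity, as TTB does
    return sorted(range(n), key=lambda i: -validity(i, pairs))

city = {'D': (0, 1, 1), 'E': (0, 0, 1), 'HH': (1, 1, 0)}
pairs = [(city['D'], city['E']), (city['D'], city['HH']),
         (city['E'], city['HH'])]
print([validity(i, pairs) for i in range(3)])   # [1.0, 0.5, 0.0]
print(ttb_order(3, pairs))                      # [0, 1, 2]
```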
Partial results concerning the accuracy of TTB compared to the accuracy of other strategies have been obtained analytically by Martignon and Hoffrage (2002). Here we subject
the problem of finding optimal cue orderings to a rigorous theoretical analysis employing
methods from the theory of computational complexity (Ausiello et al., 1999). Obviously,
TTB runs in polynomial time. Given a list of ordered pairs, it computes all cue validities
in polynomially many computing steps in terms of the size of the list. We define the optimization problem MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY as the task of minimizing the number of incorrect inferences for the lexicographic strategy on a given list of pairs. We show that there is no polynomial-time approximation algorithm that computes solutions for MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY that are only a constant factor worse than the optimum, unless P = NP. This means that the approximating factor, or performance ratio, must grow with the size of the problem.
As an extension of TTB we consider an algorithm for finding cue orderings that was called
"TTB by Conditional Validity" in the context of bounded rationality. It is based on the
greedy method, a principle widely used in algorithm design. This greedy algorithm runs
in polynomial time and we derive tight bounds for it, showing that it approximates the
optimum with a performance ratio proportional to the number of cues. An important consequence of this result is a guarantee that for those instances that have a solution that discriminates all pairs correctly, the greedy algorithm always finds a permutation attaining this
minimum. We are not aware that this quality has been established for any of the previously
studied heuristics for paired comparison. In addition, we show that TTB does not have this
property, concluding that the greedy method of constructing cue permutations performs
provably better than TTB. For a more detailed account and further results we refer to the
complete version of this work (Schmitt and Martignon, 2006).
2
Lexicographic Strategies
A lexicographic strategy is a method for comparing elements of a set $B \subseteq \{0,1\}^n$. Each component $1, \ldots, n$ of these vectors is referred to as a cue. Given $a, b \in B$, where $a = (a_1, \ldots, a_n)$ and $b = (b_1, \ldots, b_n)$, the lexicographic strategy searches for the smallest cue index $i \in \{1, \ldots, n\}$ such that $a_i$ and $b_i$ are different. The strategy then outputs one of "<" or ">" according to whether $a_i < b_i$ or $a_i > b_i$, assuming the usual order $0 < 1$ of the truth values. If no such cue exists, the strategy returns "=". Formally, let $\mathrm{diff}: B \times B \to \{1, \ldots, n+1\}$ be the function where $\mathrm{diff}(a,b)$ is the smallest cue index on which $a$ and $b$ are different, or $n+1$ if they are equal, that is,
$$\mathrm{diff}(a,b) = \min(\{i : a_i \neq b_i\} \cup \{n+1\}).$$
Then, the function $S: B \times B \to \{\text{"<"}, \text{"="}, \text{">"}\}$ computed by the lexicographic strategy is
$$S(a,b) = \begin{cases} \text{"<"} & \text{if } \mathrm{diff}(a,b) \le n \text{ and } a_{\mathrm{diff}(a,b)} < b_{\mathrm{diff}(a,b)}, \\ \text{">"} & \text{if } \mathrm{diff}(a,b) \le n \text{ and } a_{\mathrm{diff}(a,b)} > b_{\mathrm{diff}(a,b)}, \\ \text{"="} & \text{otherwise.} \end{cases}$$
Lexicographic strategies may take into account that the cues come in an order that is different from $1, \ldots, n$. Let $\pi: \{1, \ldots, n\} \to \{1, \ldots, n\}$ be a permutation of the cues. It gives rise to a mapping $\bar{\pi}: \{0,1\}^n \to \{0,1\}^n$ that permutes the components of Boolean vectors by $\bar{\pi}(a_1, \ldots, a_n) = (a_{\pi(1)}, \ldots, a_{\pi(n)})$. As $\bar{\pi}$ is uniquely defined given $\pi$, we simplify the notation and write also $\pi$ for $\bar{\pi}$. The lexicographic strategy under cue permutation $\pi$ passes through the cues in the order $\pi(1), \ldots, \pi(n)$, that is, it computes the function $S_\pi: B \times B \to \{\text{"<"}, \text{"="}, \text{">"}\}$ defined as
$$S_\pi(a,b) = S(\pi(a), \pi(b)).$$
The problem we study is that of finding a cue permutation that minimizes the number of incorrect comparisons in a given list of element pairs using the lexicographic strategy. An instance of this problem consists of a set $B$ of elements and a set of pairs $L \subseteq B \times B$. Each pair $\langle a, b\rangle \in L$ represents an inequality $a \le b$. Given a cue permutation $\pi$, we say that the lexicographic strategy under $\pi$ infers the pair $\langle a, b\rangle$ correctly if $S_\pi(a,b) \in \{\text{"<"}, \text{"="}\}$; otherwise the inference is incorrect. The task is to find a permutation $\pi$ such that the number of incorrect inferences in $L$ using $S_\pi$ is minimal, that is, a permutation $\pi$ that minimizes
$$\mathrm{INCORRECT}(\pi, L) = |\{\langle a, b\rangle \in L : S_\pi(a,b) = \text{">"}\}|.$$
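Both definitions translate directly into code; the sketch below uses 0-indexed cues and is reused in later examples:

```python
def S(a, b, perm):
    # lexicographic comparison under cue permutation perm
    for i in perm:
        if a[i] != b[i]:
            return '<' if a[i] < b[i] else '>'
    return '='

def incorrect(perm, L):
    # INCORRECT(perm, L): pairs (a, b) in L inferred as a > b
    return sum(1 for a, b in L if S(a, b, perm) == '>')
```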
3
Approximability of Optimal Cue Permutations
A large class of optimization problems, denoted APX, can be solved efficiently if the solution is required to be only a constant factor worse than the optimum (see, e.g., Ausiello
et al., 1999). Here, we prove that, if P ≠ NP, there is no polynomial-time algorithm whose solutions yield a number of incorrect comparisons that is at most a constant factor larger than the minimal number possible. It follows that the problem of approximating the optimal cue permutation is even harder than any problem in APX. The optimization problem is formally stated as follows.

MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY
  Instance: A set $B \subseteq \{0,1\}^n$ and a set $L \subseteq B \times B$.
  Solution: A permutation $\pi$ of the cues of $B$.
  Measure: The number of incorrect inferences in $L$ for the lexicographic strategy under cue permutation $\pi$, that is, $\mathrm{INCORRECT}(\pi, L)$.

Given a real number $r > 0$, an algorithm is said to approximate MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of $r$ if for every instance $(B, L)$ the algorithm returns a permutation $\pi$ such that
$$\mathrm{INCORRECT}(\pi, L) \le r \cdot \mathrm{opt}(L),$$
where $\mathrm{opt}(L)$ is the minimal number of incorrect comparisons achievable on $L$ by any permutation. The factor $r$ is also known as the performance ratio of the algorithm. The following optimization problem plays a crucial role in the derivation of the lower bound for the approximability of MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY.

MINIMUM HITTING SET
  Instance: A collection $C$ of subsets of a finite set $U$.
  Solution: A hitting set for $C$, that is, a subset $U' \subseteq U$ such that $U'$ contains at least one element from each subset in $C$.
  Measure: The cardinality of the hitting set, that is, $|U'|$.
MINIMUM HITTING SET is equivalent to MINIMUM SET COVER. Bellare et al. (1993) have shown that MINIMUM SET COVER cannot be approximated in polynomial time to within any constant factor, unless P = NP. Thus, if P ≠ NP, MINIMUM HITTING SET cannot be approximated in polynomial time to within any constant factor either.

Theorem 1. For every $r$, there is no polynomial-time algorithm that approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of $r$, unless P = NP.
Proof. We show that the existence of a polynomial-time algorithm that approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within some constant factor implies the existence of a polynomial-time algorithm that approximates MINIMUM HITTING SET to within the same factor. Then the statement follows from the equivalence of MINIMUM HITTING SET with MINIMUM SET COVER and the nonapproximability of the latter (Bellare et al., 1993). The main part of the proof consists in establishing a specific approximation-preserving reduction, or AP-reduction, from MINIMUM HITTING SET to MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY. (See Ausiello et al., 1999, for a definition of the AP-reduction.)
We first define a function $f$ that is computable in polynomial time and maps each instance of MINIMUM HITTING SET to an instance of MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY. Let $\mathbf{1}$ denote the $n$-bit vector with a 1 everywhere and $\mathbf{1}_{i_1,\ldots,i_\ell}$ the vector with 0 in positions $i_1, \ldots, i_\ell$ and 1 elsewhere. Given the collection $C$ of subsets of the set $U = \{u_1, \ldots, u_n\}$, the function $f$ maps $C$ to $(B, L)$, where $B \subseteq \{0,1\}^{n+1}$ is defined as follows:

1. Let $(\mathbf{1}, 0) \in B$.
2. For $i = 1, \ldots, n$, let $(\mathbf{1}_i, 1) \in B$.
3. For every $\{u_{i_1}, \ldots, u_{i_\ell}\} \in C$, let $(\mathbf{1}_{i_1,\ldots,i_\ell}, 1) \in B$.

Further, the set $L$ is constructed as
$$L = \{\langle(\mathbf{1}, 0), (\mathbf{1}_i, 1)\rangle : i = 1, \ldots, n\} \cup \{\langle(\mathbf{1}_{i_1,\ldots,i_\ell}, 1), (\mathbf{1}, 0)\rangle : \{u_{i_1}, \ldots, u_{i_\ell}\} \in C\}. \tag{1}$$
In the following, a pair from the first and second set on the right-hand side of equation (1) is referred to as an element pair and a subset pair, respectively. Obviously, the function $f$ is computable in polynomial time. It has the following property.
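For illustration, the mapping f can be coded up directly; the identifiers below are our own, with U = {0, ..., n-1} and C given as a list of index sets:

```python
def reduce_hitting_set(n, C):
    one = (1,) * n
    def ones_except(S):
        # the vector 1_{i_1,...,i_l}: zeros at the indices in S, ones elsewhere
        return tuple(0 if i in S else 1 for i in range(n))
    B = {one + (0,)}
    B |= {ones_except({i}) + (1,) for i in range(n)}
    B |= {ones_except(set(S)) + (1,) for S in C}
    L = [(one + (0,), ones_except({i}) + (1,)) for i in range(n)]   # element pairs
    L += [(ones_except(set(S)) + (1,), one + (0,)) for S in C]      # subset pairs
    return B, L
```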
Claim 1. Let $f(C) = (B, L)$. If $C$ has a hitting set of cardinality $k$ or less, then $f(C)$ has a cue permutation $\pi$ where $\mathrm{INCORRECT}(\pi, L) \le k$.

To prove this, assume without loss of generality that $C$ has a hitting set $U'$ of cardinality exactly $k$, say $U' = \{u_{j_1}, \ldots, u_{j_k}\}$, and let $U \setminus U' = \{u_{j_{k+1}}, \ldots, u_{j_n}\}$. Then the cue permutation
$$j_1, \ldots, j_k,\; n+1,\; j_{k+1}, \ldots, j_n$$
results in no more than $k$ incorrect inferences in $L$. Indeed, consider an arbitrary subset pair $\langle(\mathbf{1}_{i_1,\ldots,i_\ell}, 1), (\mathbf{1}, 0)\rangle$. To not be an error, one of $i_1, \ldots, i_\ell$ must occur in the hitting set $j_1, \ldots, j_k$. Hence, the first cue that distinguishes this pair has value 0 in $(\mathbf{1}_{i_1,\ldots,i_\ell}, 1)$ and value 1 in $(\mathbf{1}, 0)$, resulting in a correct comparison. Further, let $\langle(\mathbf{1}, 0), (\mathbf{1}_i, 1)\rangle$ be an element pair with $u_i \notin U'$. This pair is distinguished correctly by cue $n+1$. Finally, each element pair $\langle(\mathbf{1}, 0), (\mathbf{1}_i, 1)\rangle$ with $u_i \in U'$ is distinguished by cue $i$ with a result that disagrees with the ordering given by $L$. Thus, only element pairs with $u_i \in U'$ yield incorrect comparisons and no subset pair. Hence, the number of incorrect inferences is not larger than $|U'|$.
Next, we define a polynomial-time computable function $g$ that maps each collection $C$ of subsets of a finite set $U$ and each cue permutation $\pi$ for $f(C)$ to a subset of $U$. Given that $f(C) = (B, L)$, the set $g(C, \pi) \subseteq U$ is defined as follows:

1. For every element pair $\langle(\mathbf{1}, 0), (\mathbf{1}_i, 1)\rangle \in L$ that is compared incorrectly by $\pi$, let $u_i \in g(C, \pi)$.
2. For every subset pair $\langle(\mathbf{1}_{i_1,\ldots,i_\ell}, 1), (\mathbf{1}, 0)\rangle \in L$ that is compared incorrectly by $\pi$, let one of the elements $u_{i_1}, \ldots, u_{i_\ell} \in g(C, \pi)$.

Clearly, the function $g$ is computable in polynomial time. It satisfies the following condition.

Claim 2. Let $f(C) = (B, L)$. If $\mathrm{INCORRECT}(\pi, L) \le k$, then $g(C, \pi)$ is a hitting set of cardinality $k$ or less for $C$.

Obviously, if $\mathrm{INCORRECT}(\pi, L) \le k$, then $g(C, \pi)$ has cardinality at most $k$. To show that it is a hitting set, assume the subset $\{u_{i_1}, \ldots, u_{i_\ell}\} \in C$ is not hit by $g(C, \pi)$. Then neither of $u_{i_1}, \ldots, u_{i_\ell}$ is in $g(C, \pi)$. Hence, we have correct comparisons for the element pairs corresponding to $u_{i_1}, \ldots, u_{i_\ell}$ and for the subset pair corresponding to $\{u_{i_1}, \ldots, u_{i_\ell}\}$. As the subset pair is distinguished correctly, one of the cues $i_1, \ldots, i_\ell$ must be ranked before cue $n+1$. But then at least one of the element pairs for $u_{i_1}, \ldots, u_{i_\ell}$ yields an incorrect comparison. This contradicts the assertion that the comparisons for these element pairs are all correct. Thus, $g(C, \pi)$ is a hitting set and the claim is established.
Assume now that there exists a polynomial-time algorithm $A$ that approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of $r$. Consider the algorithm that, for a given instance $C$ of MINIMUM HITTING SET as input, calls algorithm $A$ with input $(B, L) = f(C)$, and returns $g(C, \pi)$ where $\pi$ is the output provided by $A$. Clearly, this new algorithm runs in polynomial time. We show that it approximates MINIMUM HITTING SET to within a factor of $r$. By the assumed approximation property of algorithm $A$, we have
$$\mathrm{INCORRECT}(\pi, L) \le r \cdot \mathrm{opt}(L).$$
Together with Claim 2, this implies that $g(C, \pi)$ is a hitting set for $C$ satisfying
$$|g(C, \pi)| \le r \cdot \mathrm{opt}(L).$$
From Claim 1 we obtain $\mathrm{opt}(L) \le \mathrm{opt}(C)$ and, thus,
$$|g(C, \pi)| \le r \cdot \mathrm{opt}(C).$$
Thus, the proposed algorithm for MINIMUM HITTING SET violates the approximation lower bound that holds for this problem under the assumption P ≠ NP. This proves the statement of the theorem.

Algorithm 1: GREEDY CUE PERMUTATION
  Input: a set $B \subseteq \{0,1\}^n$ and a set $L \subseteq B \times B$
  Output: a cue permutation $\pi$ for $n$ cues
  $I := \{1, \ldots, n\}$;
  for $i = 1, \ldots, n$ do
    let $j \in I$ be a cue where $\mathrm{INCORRECT}(j, L) = \min_{j' \in I} \mathrm{INCORRECT}(j', L)$;
    $\pi(i) := j$;
    $I := I \setminus \{j\}$;
    $L := L \setminus \{\langle a, b\rangle : a_j \neq b_j\}$
  end for.
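A direct Python transcription of Algorithm 1 follows (ties in the minimum are broken arbitrarily, as the pseudocode allows):

```python
def incorrect_single(j, L):
    # INCORRECT(j, L) for a single cue j
    return sum(1 for a, b in L if a[j] > b[j])

def greedy_cue_permutation(n, L):
    I, rest, perm = set(range(n)), list(L), []
    for _ in range(n):
        j = min(I, key=lambda c: incorrect_single(c, rest))
        perm.append(j)
        I.remove(j)
        rest = [(a, b) for a, b in rest if a[j] == b[j]]   # drop decided pairs
    return perm
```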
4
Greedy Approximation of Optimal Cue Permutations
The so-called greedy approach to the solution of an approximation problem is helpful when it is not known which algorithm performs best. It is a simple heuristic that in practice often provides satisfactory solutions in many situations. The algorithm GREEDY CUE PERMUTATION that we introduce here is based on the greedy method. The idea is to select the first cue according to which single cue makes a minimum number of incorrect inferences (choosing one arbitrarily if there are two or more). After that, the algorithm removes those pairs that are distinguished by the selected cue, which is reasonable as the distinctions drawn by this cue cannot be undone by later cues. This procedure is then repeated on the set of pairs left. The description of GREEDY CUE PERMUTATION is given as Algorithm 1. It employs an extension of the function INCORRECT applicable to single cues, such that for a cue $i$ we have
$$\mathrm{INCORRECT}(i, L) = |\{\langle a, b\rangle \in L : a_i > b_i\}|.$$
It is evident that Algorithm 1 runs in polynomial time, but how good is it? The least one
should demand from a good heuristic is that, whenever a minimum of zero is attainable,
it finds such a solution. This is indeed the case with GREEDY CUE PERMUTATION, as we show in the following result. Moreover, it asserts a general performance ratio for the approximation of the optimum.

Theorem 2. The algorithm GREEDY CUE PERMUTATION approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of $n$, where $n$ is the number of cues. In particular, it always finds a cue permutation with no incorrect inferences if one exists.

Proof. We show by induction on $n$ that the permutation returned by the algorithm makes a number of incorrect inferences no larger than $n \cdot \mathrm{opt}(L)$. If $n = 1$, the optimal cue
⟨001, 010⟩   ⟨010, 100⟩   ⟨010, 101⟩   ⟨100, 111⟩

Figure 1: A set of lexicographically ordered pairs with nondecreasing cue validities (1, 1/2, and 2/3). The cue ordering of TTB (1, 3, 2) causes an incorrect inference on the first pair. By Theorem 2, GREEDY CUE PERMUTATION finds the lexicographic ordering.
permutation is definitely found. Let $n > 1$. Clearly, as the incorrect inferences of a cue cannot be reversed by other cues, there is a cue $j$ with
$$\mathrm{INCORRECT}(j, L) \le \mathrm{opt}(L).$$
The algorithm selects such a cue in the first round of the loop. During the rest of the rounds, a permutation of $n-1$ cues is constructed for the set of remaining pairs. Let $j$ be the cue that is chosen in the first round, $I' = \{1, \ldots, j-1, j+1, \ldots, n\}$, and $L' = L \setminus \{\langle a, b\rangle : a_j \neq b_j\}$. Further, let $\mathrm{opt}_{I'}(L')$ denote the minimum number of incorrect inferences taken over the permutations of $I'$ on the set $L'$. Then, we observe that
$$\mathrm{opt}(L) \ge \mathrm{opt}(L') = \mathrm{opt}_{I'}(L').$$
The inequality is valid because of $L' \subseteq L$. (Note that $\mathrm{opt}(L')$ refers to the minimum taken over the permutations of all cues.) The equality holds as cue $j$ does not distinguish any pair in $L'$. By the induction hypothesis, rounds 2 to $n$ of the loop determine a cue permutation $\pi'$ with $\mathrm{INCORRECT}(\pi', L') \le (n-1) \cdot \mathrm{opt}_{I'}(L')$. Thus, the number of incorrect inferences made by the permutation $\pi$ finally returned by the algorithm satisfies
$$\mathrm{INCORRECT}(\pi, L) \le \mathrm{INCORRECT}(j, L) + (n-1) \cdot \mathrm{opt}_{I'}(L'),$$
which is, by the inequalities derived above, not larger than $\mathrm{opt}(L) + (n-1) \cdot \mathrm{opt}(L)$, as stated.
Corollary 3. On inputs that have a cue ordering without incorrect comparisons under the lexicographic strategy, GREEDY CUE PERMUTATION can be better than TTB.

Proof. Figure 1 shows a set of four lexicographically ordered pairs. According to Theorem 2, GREEDY CUE PERMUTATION comes up with the given permutation of the cues. The validities are 1, 1/2, and 2/3. Thus, TTB ranks the cues as 1, 3, 2, whereupon the first pair is inferred incorrectly.
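Running the earlier sketches on the four pairs of Figure 1 reproduces this separation between the two heuristics:

```python
L = [((0, 0, 1), (0, 1, 0)), ((0, 1, 0), (1, 0, 0)),
     ((0, 1, 0), (1, 0, 1)), ((1, 0, 0), (1, 1, 1))]
ttb = ttb_order(3, L)                   # [0, 2, 1], i.e., cues 1, 3, 2
grd = greedy_cue_permutation(3, L)      # [0, 1, 2]
print(incorrect(ttb, L), incorrect(grd, L))   # 1 0
```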
Finally, we consider lower bounds on the performance ratio of GREEDY CUE PERMUTATION. The proof of this claim is omitted here.

Theorem 4. The performance ratio of GREEDY CUE PERMUTATION is at least $\max\{n/2, |L|/2\}$.
5
Conclusions
The result that the optimization problem MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY cannot be approximated in polynomial time to within any constant factor answers a long-standing question of psychological research into models of bounded rationality: How accurate are fast and frugal heuristics? It follows that no fast, that is, polynomial-time, algorithm can approximate the optimum well, under the widely accepted assumption that P ≠ NP. A further question is concerned with a specific fast and frugal heuristic: How accurate is TTB? The new algorithm GREEDY CUE PERMUTATION has been shown to perform provably better than TTB. In detail, it always finds accurate solutions when they exist, in contrast to TTB. With this contribution we pose a challenge to cognitive psychology: to study the relevance of the greedy method as a model for bounded rationality.
Acknowledgment. The first author has been supported in part by the Deutsche
Forschungsgemeinschaft (DFG).
References
Ausiello, G., Crescenzi, P., Gambosi, G., Kann, V., Marchetti-Spaccamela, A., and Protasi, M. (1999). Complexity and Approximation: Combinatorial Problems and Their Approximability Properties. Springer-Verlag, Berlin.
Bellare, M., Goldwasser, S., Lund, C., and Russell, A. (1993). Efficient probabilistically checkable proofs and applications to approximation. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 294-304. ACM Press, New York, NY.
Bröder, A. (2000). Assessing the empirical validity of the "take-the-best" heuristic as a model of human probabilistic inference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26:1332-1346.
Bröder, A. (2002). Take the best, Dawes' rule, and compensatory decision strategies: A regression-based classification method. Quality & Quantity, 36:219-238.
Bröder, A. and Schiffer, S. (2003). Take the best versus simultaneous feature matching: Probabilistic inferences from memory and effects of representation format. Journal of Experimental Psychology: General, 132:277-293.
Gigerenzer, G. and Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103:650-669.
Gigerenzer, G., Todd, P. M., and the ABC Research Group (1999). Simple Heuristics That Make Us Smart. Oxford University Press, New York, NY.
Hogarth, R. M. and Karelaia, N. (2003). "Take-the-best" and other simple strategies: Why and when they work "well" in binary choice. DEE Working Paper 709, Universitat Pompeu Fabra, Barcelona.
Lee, M. D. and Cummins, T. D. R. (2004). Evidence accumulation in decision making: Unifying the "take the best" and the "rational" models. Psychonomic Bulletin & Review, 11:343-352.
Martignon, L. and Hoffrage, U. (2002). Fast, frugal, and fit: Simple heuristics for paired comparison. Theory and Decision, 52:29-71.
Nellen, S. (2003). The use of the "take the best" heuristic under different conditions, modeled with ACT-R. In Detje, F., Dörner, D., and Schaub, H., editors, Proceedings of the Fifth International Conference on Cognitive Modeling, pages 171-176, Universitätsverlag Bamberg, Bamberg.
Newell, B. R. and Shanks, D. R. (2003). Take the best or look at the rest? Factors influencing "One-Reason" decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29:53-65.
Newell, B. R., Weston, N. J., and Shanks, D. R. (2003). Empirical tests of a fast-and-frugal heuristic: Not everyone "takes-the-best". Organizational Behavior and Human Decision Processes, 91:82-96.
Schmitt, M. and Martignon, L. (2006). On the complexity of learning lexicographic strategies. Journal of Machine Learning Research, 7(Jan):55-83.
Simon, H. A. (1982). Models of Bounded Rationality, Volume 2. MIT Press, Cambridge, MA.
Slegers, D. W., Brake, G. L., and Doherty, M. E. (2000). Probabilistic mental models with continuous predictors. Organizational Behavior and Human Decision Processes, 81:98-114.
Todd, P. M. and Dieckmann, A. (2005). Heuristics for ordering cue search in decision making. In Saul, L. K., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17, pages 1393-1400. MIT Press, Cambridge, MA.
Todd, P. M. and Gigerenzer, G. (2000). Précis of "Simple Heuristics That Make Us Smart". Behavioral and Brain Sciences, 23:727-741.
1,997 | 2,814 | Cue Integration for Figure/Ground Labeling
Xiaofeng Ren, Charless C. Fowlkes and Jitendra Malik
Computer Science Division, University of California, Berkeley, CA 94720
{xren,fowlkes,malik}@cs.berkeley.edu
Abstract
We present a model of edge and region grouping using a conditional
random field built over a scale-invariant representation of images to integrate multiple cues. Our model includes potentials that capture low-level
similarity, mid-level curvilinear continuity and high-level object shape.
Maximum likelihood parameters for the model are learned from human
labeled groundtruth on a large collection of horse images using belief
propagation. Using held out test data, we quantify the information gained
by incorporating generic mid-level cues and high-level shape.
1 Introduction
Figure/ground organization, the binding of contours to surfaces, is a classical problem in
vision. In the 1920s, Edgar Rubin pointed to several generic properties, such as closure,
which governed the perception of figure/ground. However, it is clear that in the context of
natural scenes, such processing must be closely intertwined with many low- and mid-level
grouping cues as well as a priori object knowledge [10].
In this paper, we study a simplified task of figure/ground labeling in which the goal is
to label every pixel as belonging to either a figural object or background. Our goal is to
understand the role of different cues in this process, including low-level cues, such as edge
contrast and texture similarity; mid-level cues, such as curvilinear continuity; and high-level cues, such as characteristic shape or texture of the object. We develop a conditional
random field model [7] over edges, regions and objects to integrate these cues. We train
the model from human-marked groundtruth labels and quantify the relative contributions
of each cue on a large collection of horse images[2].
In computer vision, the work of Geman and Geman [3] inspired a whole subfield of work
on Markov Random Fields in relation to segmentation and denoising. More recently, Conditional Random Fields (CRF) have been applied to low-level segmentation [6, 12, 4] and
have shown performance superior to traditional MRFs. However, most of the existing
MRF/CRF models focus on pixel-level labeling, requiring inferences over millions of pixels. Being tied to the pixel resolution, they are also unable to deal with scale change or
explicitly capture mid-level cues such as junctions. Our approach overcomes these difficulties by utilizing a scale-invariant representation of image contours and regions where
each variable in our model can correspond to hundreds of pixels. It is also quite straightforward to design potentials which capture complicated relationships between these mid-level
tokens in a transparent way.
Interest in combining object knowledge with segmentation has grown quickly over the
Figure 1: A scale-invariant representation of images: Given the input (1), we estimate the
local probability of boundary Pb based on gradients (2). We then build a piecewise linear
approximation of the edge map and complete it with Constrained Delaunay Triangulation
(CDT). The black edges in (3) are gradient edges detected in (2); the green edges are
potential completions generated by CDT. (4) We perform inference in a probabilistic model
built on top of this representation and extract marginal distributions on edges X, triangular
regions Y and object pose Z.
last few years [2, 16, 14]. Our probabilistic approach is similar in spirit to [14] however
we focus on learning parameters of a discriminative model and quantify our performance
on test data. Compared to previous techniques which rely heavily on top-down template
matching [2, 5], our approach has three major advantages: (1) We are able to use mid-level grouping cues including junctions and continuity. Our results show these cues make
quantitatively significant contributions. (2) We combine cues in a probabilistic framework
where the relative weighting of cues is learned from training data resulting in weights that
are easy to interpret. (3) The role of different cues can be easily studied by "surgically
removing" them and refitting the remaining parameters.
2 A conditional random field for figure/ground labeling
Figure 1 provides an overview of our technique for building a discrete, scale-independent
representation of image boundaries from a low-level detector. First we compute an edge
map using the boundary detector of [9] which utilizes both brightness and texture contrast
to estimate the probability of boundary, Pb, at each pixel. Next we use Canny's hysteresis thresholding to trace the Pb boundaries and then recursively split the boundaries using
angles, a scale-invariant measure, until each segment is approximately linear. Finally we
utilize the Constrained Delaunay Triangulation [13] to complete the piecewise linear approximations. CDT often completes gaps in object boundaries where local gradient information is absent. More details about this construction can be found in [11].
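The recursive splitting step can be sketched as follows. This is a minimal illustration under our own assumptions (contours given as lists of (x, y) pixel coordinates, a roughly 15-degree threshold); the function and parameter names are ours, not the paper's.

```python
import math

def split_contour(points, angle_thresh=0.26):  # ~15 degrees; an assumed threshold
    """Recursively split a traced contour into near-linear pieces.

    A piece is split at its point of maximum turning (deviation from a
    straight line between the endpoints) until every piece is
    approximately linear. Returns (start, end) index pairs into `points`.
    """
    def deviation(p, q, r):
        # How far the path p -> q -> r deviates from straight, in radians.
        a1 = math.atan2(p[1] - q[1], p[0] - q[0])
        a2 = math.atan2(r[1] - q[1], r[0] - q[0])
        d = abs(a1 - a2) % (2 * math.pi)
        return abs(math.pi - min(d, 2 * math.pi - d))

    def recurse(i, j):
        if j - i < 2:
            return [(i, j)]
        k = max(range(i + 1, j),
                key=lambda m: deviation(points[i], points[m], points[j]))
        if deviation(points[i], points[k], points[j]) < angle_thresh:
            return [(i, j)]  # already approximately linear
        return recurse(i, k) + recurse(k, j)

    return recurse(0, len(points) - 1)

# An L-shaped contour splits into two linear pieces.
contour = [(x, 0) for x in range(10)] + [(9, y) for y in range(1, 10)]
print(split_contour(contour))  # [(0, 9), (9, 18)]
```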
Let G be the resulting CDT graph. The edges and triangles in G are natural entities for
figure/ground labeling. We introduce the following random variables:
• Edges: X_e is 1 if edge e in the CDT is a true boundary and 0 otherwise.
• Regions: Y_t is 1 if triangle t corresponds to figure and 0 otherwise.
• Pose: Z encodes the figural object's pose in the scene. We use a very simple Z which considers a discrete configuration space given by a grid of 25 possible image locations. Z is easily augmented to include an indicator of object category or aspect as well as location.
We now describe a conditional random field model on {X, Y, Z} used to integrate multiple
grouping cues. The model takes the form of a log-linear combination of features which are
functions of variables and image measurements. We consider Z a latent variable which is
marginalized out by assuming a uniform distribution over aspects and locations.
    P(X, Y | Z, I, \Theta) = \frac{1}{Z(I, \Theta)} \exp(-E(X, Y | Z, I, \Theta))

where the energy E of a configuration is linear in the parameters \Theta = \{\alpha, \vec{\beta}, \vec{\gamma}, \lambda, \mu, \vec{\nu}, \vec{\rho}\} and given by

    E = -\alpha \sum_e L_1(X_e | I) - \vec{\beta} \cdot \sum_{\langle s,t \rangle} \vec{L}_2(Y_s, Y_t | I) - \vec{\gamma} \cdot \sum_V \vec{M}_1(X_V | I)
        - \lambda \sum_{\langle s,t \rangle} M_2(Y_s, Y_t, X_e) - \mu \sum_t H_1(Y_t | I) - \vec{\nu} \cdot \sum_t \vec{H}_2(Y_t | Z, I) - \vec{\rho} \cdot \sum_e \vec{H}_3(X_e | Z, I)

(in the M_2 sum, e denotes the edge shared by adjacent triangles s and t).
The table below gives a summary of each potential. The next section fills in details.
    Cue class     Potential               Description
    Similarity    L_1(X_e | I)            Edge energy along e
    Similarity    L_2(Y_s, Y_t | I)       Brightness/texture similarity between s and t
    Continuity    M_1(X_V | I)            Collinearity and junction frequency at vertex V
    Closure       M_2(Y_s, Y_t, X_e)      Consistency of edge and adjoining regions
    Familiarity   H_1(Y_t | I)            Similarity of region t to exemplar texture
    Familiarity   H_2(Y_t | Z, I)         Compatibility of region shape with pose
    Familiarity   H_3(X_e | Z, I)         Compatibility of local edge shape with pose
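As a sketch of how these potentials combine, the energy is a negated, weighted sum of summed feature vectors; the dictionary layout and the numbers below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def energy(features, theta):
    """Log-linear CRF energy E = -sum_k theta_k . F_k, where F_k is the
    feature vector of potential k summed over the configuration (X, Y, Z).
    Both dictionaries are assumed precomputed elsewhere."""
    return -sum(np.dot(theta[k], features[k]) for k in theta)

# Hypothetical weights and summed features for two of the potentials.
theta = {"L1": np.array([1.2]), "M1": np.array([2.46, 0.87, 1.15, 0.01])}
features = {"L1": np.array([-3.1]), "M1": np.array([40.0, 2.0, 5.0, 0.0])}
print(energy(features, theta))
```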
3 Cues for figure/ground labeling
3.1 Low-level Cues: Similarity of Brightness and Texture
To capture the locally measured edge contrast, we assign a singleton edge potential whose energy is

    L_1(X_e | I) = \log(Pb_e) \, X_e

where Pb_e is the average Pb recorded over the pixels corresponding to edge e.
Since the triangular regions have larger support than the local edge detector, we also include a pairwise, region-based similarity cue, computed as

    \vec{\beta} \cdot \vec{L}_2(Y_s, Y_t | I) = \big( \beta_B \log f(|I_s - I_t|) + \beta_T \log g(\chi^2(h_s, h_t)) \big) 1_{\{Y_s = Y_t\}}
where f predicts the likelihood of s and t belonging to the same group given the difference
of average image brightness and g makes a similar prediction based on the \chi^2 difference
between histograms of vector quantized filter responses (referred to as textons [8]) which
describe the texture in the two regions.
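A minimal sketch of this pairwise feature, assuming region statistics are precomputed; the paper learns f and g from data, so the sigmoids below are placeholders.

```python
import numpy as np

def chi2(h1, h2, eps=1e-10):
    """Chi-squared distance between two normalized histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def l2_feature(I_s, I_t, h_s, h_t, f, g):
    """The two L2 similarity features (log f, log g); in the full energy
    they are multiplied by 1{Ys = Yt} and weighted by (beta_B, beta_T)."""
    return np.array([np.log(f(abs(I_s - I_t))), np.log(g(chi2(h_s, h_t)))])

f = g = lambda d: 1.0 / (1.0 + np.exp(5.0 * (d - 0.5)))  # placeholder predictors
print(l2_feature(0.4, 0.6, [0.7, 0.3], [0.5, 0.5], f, g))
```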
3.2 Mid-level Cues: Curvilinear Continuity and Closure
There are two types of edges in the CDT graph, gradient-edges (detected by Pb) and completed-edges (filled in by the triangulation). Since true boundaries are more commonly marked by a gradient, we keep track of these two types of edges separately when modeling junctions. To capture continuity and the frequency of different junction types, we assign energy:

    \vec{\gamma} \cdot \vec{M}_1(X_V | I) = \sum_{i,j} \gamma_{i,j} 1_{\{\deg_g(V) = i,\ \deg_c(V) = j\}} + \gamma_C 1_{\{\deg_g(V) + \deg_c(V) = 2\}} \log(h(\theta))
where X_V = \{X_{e_1}, X_{e_2}, \ldots\} is the set of edge variables incident on V, \deg_g(V) is the number of gradient-edges at vertex V for which X_e = 1, and similarly \deg_c(V) is the number of completed-edges that are "turned on". When the total degree of a vertex is 2, \gamma_C weights the continuity of the two edges. Here h is the output of a logistic function fit to |\theta| and the probability of continuation. It is smooth and symmetric around \theta = 0 and falls off as |\theta| grows. If the angle between the two edges is close to 0, they form a good continuation, h(\theta) is large, and they are more likely to both be turned on.
In order to assert the duality between segments and boundaries, we use a compatibility term
    M_2(Y_s, Y_t, X_e) = 1_{\{Y_s = Y_t,\ X_e = 0\}} + 1_{\{Y_s \neq Y_t,\ X_e = 1\}}
which simply counts when the label of s and t is consistent with that of e.
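The junction statistics that feed M1 can be gathered per vertex as sketched below; the container types and the placeholder continuation model h are our own assumptions.

```python
import math

def junction_features(edge_states, edge_types, angle, h, max_deg=3):
    """Features for the M1 potential at one CDT vertex.

    edge_states: dict edge -> 0/1 assignment X_e for edges incident on V.
    edge_types:  dict edge -> 'gradient' or 'completed'.
    angle:       continuation angle theta when exactly two edges are on.
    Returns (deg_g, deg_c, continuity_term); degrees are capped at max_deg
    for the finite table of junction-type weights gamma_{i,j}.
    """
    on = [e for e, x in edge_states.items() if x == 1]
    deg_g = min(sum(edge_types[e] == 'gradient' for e in on), max_deg)
    deg_c = min(sum(edge_types[e] == 'completed' for e in on), max_deg)
    cont = math.log(h(angle)) if deg_g + deg_c == 2 else 0.0
    return deg_g, deg_c, cont

h = lambda a: max(1e-6, 1.0 - abs(a) / math.pi)  # stand-in for the fitted logistic
print(junction_features({'e1': 1, 'e2': 1},
                        {'e1': 'gradient', 'e2': 'gradient'}, 0.1, h))
```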
3.3 High-level Cues: Familiarity of Shape and Texture
We are interested in encoding high-level knowledge about object categories. In this paper
we experiment with a single object category, horses, but we believe our high-level cues will
scale to multiple objects in a natural way.
We compute texton histograms h_t for each triangular region (as in L_2). From the set of training images, we use k-medoids to find 10 representative histograms \{h_1^F, \ldots, h_{10}^F\} for the collection of segments labeled as figure and 10 histograms \{h_1^G, \ldots, h_{10}^G\} for the set of background segments. Each segment in a test image is compared to the set of exemplar histograms using the \chi^2 histogram difference. We use the energy term

    H_1(Y_t | I) = \log \left( \frac{\min_i \chi^2(h_t, h_i^F)}{\min_i \chi^2(h_t, h_i^G)} \right) Y_t
to capture the cue of texture familiarity.
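A sketch of this feature, assuming precomputed texton histograms; the exemplars below are toy values.

```python
import numpy as np

def chi2(a, b, eps=1e-10):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

def h1_feature(h_t, figure_exemplars, ground_exemplars):
    """Log-ratio of minimum chi-squared distances to figure vs. background
    exemplar histograms; multiplied by Y_t in the energy."""
    d_f = min(chi2(h_t, h) for h in figure_exemplars)
    d_g = min(chi2(h_t, h) for h in ground_exemplars)
    return np.log(d_f / d_g)

fig_ex = [np.array([0.8, 0.2]), np.array([0.6, 0.4])]
gnd_ex = [np.array([0.1, 0.9])]
print(h1_feature(np.array([0.7, 0.3]), fig_ex, gnd_ex))  # negative: figure-like
```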
We describe the global shape of the object using a template T(x, y) generated by averaging the groundtruth object segmentation masks. This yields a silhouette with quite fuzzy boundaries due to articulations and scale variation. Figure 2(a) shows the template extracted from our training data. Let O(Z, t) be the normalized overlap between the template centered at Z = (x_0, y_0) and the triangular region corresponding to Y_t. This is computed as the integral of T(x, y) over the triangle t divided by the area of t. We then use the energy

    \vec{\nu} \cdot \vec{H}_2(Y_t | Z) = \nu_F \log(O(Z, t)) \, Y_t + \nu_G \log(1 - O(Z, t)) \, (1 - Y_t)
In the case of multiple objects or aspects of a single object, we use multiple templates and
augment Z with an indicator of the aspect Z = (x, y, a). In our experiments on the dataset
considered here, we found that the variability is too small (all horses facing left) to see a
significant impact on performance from adding multiple aspects.
Lastly, we would like to capture the spatial layout of articulated structures such as the horses
legs and head. To describe characteristic configuration of edges, we utilize the geometric
blur[1] descriptor applied to the output of the P b boundary detector. The geometric blur
centered at location x, GB_x(y), is a linear operator applied to Pb(x, y) whose value is another image given by the "convolution" of Pb(x, y) with a spatially varying Gaussian.
Geometric blur is motivated by the search for a linear operator which will respond strongly
to a particular object feature and is invariant to some set of transformations of the image.
We use the geometric blur computed at the set of image edges (Pb > 0.05) to build a library of 64 prototypical "shapemes" from the training data by vector quantization. For each edge
Xe which expresses a particular shapeme we would like to know whether Xe should be
Figure 2: Using a priori shape knowledge: (a) average horse template. (b) one shapeme, capturing long horizontal curves. Shown here is the average shape in this shapeme cluster. (c) on a horse, this shapeme occurs at the horse's back and stomach. Shown here is the density of the shapeme M^{ON} overlaid with a contour plot of the average mask. (d) another shapeme, capturing parallel vertical lines. (e) on a horse, this shapeme occurs at the legs.
"turned on". This is estimated from training data by building spatial maps M_i^{ON}(x, y) and M_i^{OFF}(x, y) for each shapeme relative to the object center, which record the frequency of a true/false boundary expressing shapeme i. Figure 2(b-e) shows two example shapemes and their corresponding M^{ON} maps. Let S_{e,i}(x, y) be the indicator of the set of pixels on edge e which express shapeme i. For an object in pose Z = (x_0, y_0) we use the energy

    \vec{\rho} \cdot \sum_e \vec{H}_3(X_e | Z, I) = \sum_e \frac{1}{|e|} \Big( \rho_{ON} \sum_{i,x,y} \log(M_i^{ON}(x - x_0, y - y_0)) S_{e,i}(x, y) X_e
        + \rho_{OFF} \sum_{i,x,y} \log(M_i^{OFF}(x - x_0, y - y_0)) S_{e,i}(x, y) (1 - X_e) \Big)
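A sketch of the per-edge feature under one hypothesized pose; the center-indexed map storage and the toy numbers are assumptions for illustration.

```python
import numpy as np

def h3_feature(edge_pixels, shapeme_ids, M_on, M_off, pose):
    """H3 shape-layout features for one edge.

    M_on/M_off map each shapeme id to a 2D frequency array indexed by
    pixel offset from the object center (array center = zero offset).
    Returns (f_on, f_off), which multiply X_e and (1 - X_e) in the energy
    with weights rho_ON, rho_OFF; the 1/|e| normalization is included.
    """
    x0, y0 = pose
    f_on = f_off = 0.0
    for (x, y), i in zip(edge_pixels, shapeme_ids):
        cx, cy = M_on[i].shape[0] // 2, M_on[i].shape[1] // 2
        f_on += np.log(M_on[i][cx + x - x0, cy + y - y0])
        f_off += np.log(M_off[i][cx + x - x0, cy + y - y0])
    n = len(edge_pixels)
    return f_on / n, f_off / n

# Toy example: one shapeme with 5x5 maps, smoothed to avoid log(0).
M_on = {0: np.full((5, 5), 0.1)}
M_off = {0: np.full((5, 5), 0.5)}
print(h3_feature([(11, 10)], [0], M_on, M_off, pose=(10, 10)))
```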
4 Learning cue integration
We carry out approximate inference using loopy belief propagation [15] which appears to
converge quickly to a reasonable solution for the graphs and potentials in question.
To fit parameters of the model, we maximize the joint likelihood over X, Y, Z taking each
image as an iid sample. Since our model is log-linear in the parameters \Theta, partial derivatives always yield the difference between the empirical expectation of a feature given by the training data and the expected value given the model parameters. For example, the derivative with respect to the continuation parameter \gamma_C for a single training image/ground-truth labeling (I, X, Y, Z) is:

    \frac{\partial}{\partial \gamma_C} \log P(X, Y | Z, I, \Theta)
      = \frac{\partial}{\partial \gamma_C} \sum_V \gamma_C 1_{\{\deg_g(V) + \deg_c(V) = 2\}} \log(h(\theta)) - \frac{\partial}{\partial \gamma_C} \log Z(I, \Theta)
      = \sum_V 1_{\{\deg_g(V) + \deg_c(V) = 2\}} \log(h(\theta)) - \Big\langle \sum_V 1_{\{\deg_g(V) + \deg_c(V) = 2\}} \log(h(\theta)) \Big\rangle

where the expectation \langle \cdot \rangle is taken with respect to P(X, Y | Z, I, \Theta).
Given this estimate, we optimize the parameters by gradient descent. We have also used
the difference of the energy and the Bethe free energy given by the beliefs as an estimate
of the log likelihood in order to support line-search in conjugate gradient or quasi-newton
routines. For our model, we find that gradient descent with momentum is efficient enough.
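One such update can be sketched as below, assuming the empirical counts and the BP-estimated expectations are available as dictionaries keyed by potential; the learning-rate and momentum values are illustrative.

```python
import numpy as np

def gradient_step(theta, empirical, expected, lr=0.1, momentum=0.9, vel=None):
    """One momentum gradient-ascent step on the log-likelihood: for a
    log-linear model the gradient per weight is the empirical feature
    count minus the model expectation."""
    if vel is None:
        vel = {k: np.zeros_like(v) for k, v in theta.items()}
    for k in theta:
        vel[k] = momentum * vel[k] + lr * (empirical[k] - expected[k])
        theta[k] = theta[k] + vel[k]
    return theta, vel

theta = {"gamma_C": np.array([0.5])}
theta, vel = gradient_step(theta, {"gamma_C": np.array([3.0])},
                           {"gamma_C": np.array([2.4])})
print(theta)
```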
[Figure 3 panels: deg=0, weight=2.4607; deg=1, weight=0.8742; deg=2, weight=1.1458; deg=3, weight=0.0133.]
Figure 3: Learning about junctions: (a) deg=0, no boundary detected; the most common
case. (b) line endings. (c) continuations of contours, more common than line endings. (d)
T-junctions, very rare for the horse dataset. Compare with hand set potentials of Geman
and Geman [3].
5 Experiments
In our experiments we use 344 grayscale images of the horse dataset of Borenstein et al. [2]. Half of the images are used for training and half for testing. Human-marked segmentations are used¹ for both training and evaluation.
Training: loopy belief propagation on a typical CDT graph converges in about 1 second.
The gradient descent learning described above converges within 1000 iterations. To understand the weights given by the learning procedure, Figure 3 shows some of the junction
types in M_1 and their associated weights \gamma.
Testing: we evaluate the performance of our model on both edge and region labels. We
present the results using a precision-recall curve which shows the trade-off between false
positives and missed detections. For each edge e, we assign the marginal probability E[Xe ]
to all pixels (x, y) belonging to e. Then for each threshold r, pixels above r are matched
to human-marked boundaries H. The precision P = P (H(x, y) = 1|PE (x, y) > r) and
recall R = P (PE (x, y) > r|H(x, y) = 1) are recorded. Similarly, each pixel in a triangle
t is assigned the marginal probability E[Yt ] and the precision and recall of the ground-truth
figural pixels computed.
The evaluations are shown in Figure 4 for various combinations of cues. Figure 5 shows
our results on some of the test images.
6 Conclusion
We have introduced a conditional random field model on a triangulated representation of
images for figure/ground labeling. We have measured the contributions of mid- and highlevel cues by quantitative evaluations on held out test data. Our findings suggest that midlevel cues provide useful information, even in the presence of high-level shape cues. In
future work we plan to extend this model to multiple object categories.
References
[1] A. Berg and J. Malik. Geometric blur for template matching. In CVPR, 2001.
[2] E. Borenstein and S. Ullman. Class-specific, top-down segmentation. In Proc. 7th Europ. Conf.
Comput. Vision, volume 2, pages 109–124, 2002.
[3] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Analysis and Machine Intelligence, 6:721–741, Nov. 1984.
¹ From the human segmentations on the pixel grid, we use two simple techniques to establish
groundtruth labels on the CDT edges Xe and triangles Yt . For Xe , we run a maximum-cardinality
bipartite matching between the human marked boundaries and the CDT edges. We label Xe = 1 if
75% of the pixels lying under the edge e are matched to human boundaries. For Yt , we label Yt = 1
if at least half of the pixels within the triangle are figural pixels in the human segmentation.
[Figure 4 plots. (a) Boundaries, precision vs. recall; legend: Pb [F=0.54], Pb + M [F=0.56], Pb + H [F=0.62], Pb + M + H [F=0.66], Ground Truth [F=0.80]. (b) Regions, precision vs. recall; legend: L+M [F=0.66], L+H [F=0.82], L+M+H [F=0.83], Ground Truth [F=0.95].]
Figure 4: Performance evaluation: (a) precision-recall curves for horse boundaries, models with low-level cues only (Pb), low- plus mid-level cues (Pb+M), low- plus high-level cues (Pb+H), and all three classes of cues combined (Pb+M+H). The F-measure recorded in the legend is the maximal harmonic mean of precision and recall and provides an overall ranking. Using high-level cues greatly improves the boundary detection performance. Mid-level continuity cues are useful with or without high-level cues. (b) precision-recall for regions. The poor performance of the baseline L+M model indicates the ambiguity of figure/ground labeling at low level despite successful boundary detection. High-level shape knowledge is the key, consistent with evidence from psychophysics [10]. In both boundary and region cases, the groundtruth labels on CDTs are nearly perfect, indicating that the CDT graphs preserve most of the image structure.
[4] X. He, R. Zemel, and M. Carreira-Perpinan. Multiscale conditional random fields for image
labelling. In IEEE Conference on Computer Vision and Pattern Recognition, 2004.
[5] M. P. Kumar, P. H. S. Torr, and A. Zisserman. OBJ CUT. In CVPR, 2005.
[6] S. Kumar and M. Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. In ICCV, 2003.
[7] John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on
Machine Learning, 2001.
[8] J. Malik, S. Belongie, J. Shi, and T. Leung. Textons, contours and regions: Cue integration in
image segmentation. In Proc. 7th Int'l. Conf. Computer Vision, pages 918–925, 1999.
[9] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using brightness and texture. In Advances in Neural Information Processing Systems 15, 2002.
[10] M. A. Peterson and B. S. Gibson. Object recognition contributions to figure-ground organization. Perception and Psychophysics, 56:551–564, 1994.
[11] X. Ren, C. Fowlkes, and J. Malik. Mid-level cues improve boundary detection. Technical
Report UCB//CSD-05-1382, UC Berkeley, January 2005.
[12] N. Shental, A. Zomet, T. Hertz, and Y. Weiss. Pairwise clustering and graphical models. In
NIPS 2003, 2003.
[13] J. Shewchuk. Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator. In First Workshop on Applied Computational Geometry, pages 124–133, 1996.
[14] Z. W. Tu, X. R. Chen, A. L. Yuille, and S. C. Zhu. Image parsing: segmentation, detection, and
recognition. In ICCV, 2003.
[15] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural
Computation, 2000.
[16] S. Yu, R. Gross, and J. Shi. Concurrent object segmentation and recognition with graph partitioning. In Advances in Neural Information Processing Systems 15, 2002.
Figure 5: Sample results. (a) the input grayscale images. (b) the low-level boundary map
output by Pb. (c) the edge marginals under our full model and (d) the image masked by the output region marginals. A red cross in (d) indicates the most probable object center. By combining relatively simple low-/mid-/high-level cues in a learning framework, we are able to find and segment horses under varying conditions with only a simple object model.
The boundary maps show the model is capable of suppressing strong gradients in the scene
background while boosting low-contrast edges between figure and ground. (Row 3) shows
an example of an unusual pose. In (Row 5) we predict a correct off-center object location
and (Row 8) demonstrates grouping together figure with non-homogeneous appearance.
1,998 | 2,815 | Bayesian models of human action understanding
Chris L. Baker, Joshua B. Tenenbaum & Rebecca R. Saxe
{clbaker,jbt,saxe}@mit.edu
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Abstract
We present a Bayesian framework for explaining how people reason
about and predict the actions of an intentional agent, based on observing its behavior. Action-understanding is cast as a problem of inverting
a probabilistic generative model, which assumes that agents tend to act
rationally in order to achieve their goals given the constraints of their environment. Working in a simple sprite-world domain, we show how this
model can be used to infer the goal of an agent and predict how the agent
will act in novel situations or when environmental constraints change.
The model provides a qualitative account of several kinds of inferences
that preverbal infants have been shown to perform, and also fits quantitative predictions that adult observers make in a new experiment.
1 Introduction
A woman is walking down the street. Suddenly, she turns 180 degrees and begins running
in the opposite direction. Why? Did she suddenly realize she was going the wrong way,
or change her mind about where she should be headed? Did she remember something
important left behind? Did she see someone she is trying to avoid? These explanations for
the woman's behavior derive from taking the intentional stance: treating her as a rational
agent whose behavior is governed by beliefs, desires or other mental states that refer to
objects, events, or states of the world [5].
Both adults and infants have been shown to make robust and rapid intentional inferences
about agents' behavior, even from highly impoverished stimuli. In "sprite-world" displays,
simple shapes (e.g., circles) move in ways that convey a strong sense of agency to adults,
and that lead to the formation of expectations consistent with goal-directed reasoning in infants [9, 8, 14]. The importance of the intentional stance in interpreting everyday situations,
together with its robust engagement even in preverbal infants and with highly simplified
perceptual stimuli, suggest that it is a core capacity of human cognition.
In this paper we describe a computational framework for modeling intentional reasoning in
adults and infants. Interpreting an agent's behavior via the intentional stance poses a highly
underconstrained inference problem: there are typically many configurations of beliefs and
desires consistent with any sequence of behavior. We define a probabilistic generative
model of an agent?s behavior, in which behavior is dependent on hidden variables representing beliefs and desires. We then model intentional reasoning as a Bayesian inference
about these hidden variables given observed behavior sequences.
It is often said that "vision is inverse graphics": the inversion of a causal physical process of scene formation. By analogy, our analysis of intentional reasoning might be called "inverse planning", where the observer infers an agent's intentions, given observations of the agent's behavior, by inverting a model of how intentions cause behavior. The intentional stance assumes that an agent's actions depend causally on mental states via the principle
of rationality: rational agents tend to act to achieve their desires as optimally as possible,
given their beliefs. To achieve their desired goals, agents must typically not only select
single actions but must construct plans, or sequences of intended actions. The standards
of "optimal plan" may vary with agent or circumstance: possibilities include achieving goals "as quickly as possible", "as cheaply ...", "as reliably ...", and so on. We assume a
soft, probabilistic version of the rationality principle, allowing that agents can often only
approximate the optimal sequence of actions, and occasionally act in unexpected ways.
The paper is organized as follows. We first review several theoretical accounts of intentional reasoning from the cognitive science and artificial intelligence literatures, along
with some motivating empirical findings. We then present our computational framework,
grounding the discussion in a specific sprite-world domain. Lastly, we present results of
our model on two sprite-world examples inspired by previous experiments in developmental psychology, and results of the model on our own experiments.
2 Empirical studies of intentional reasoning in infants and adults
2.1 Inferring an invariant goal
The ability to predict how an agent's behavior will adapt when environmental circumstances change, such as when an obstacle is inserted or removed, is a critical aspect of intentional reasoning. Gergely, Csibra and colleagues [8, 4] showed that preverbal infants can infer an agent's goal that appears to be invariant across different circumstances, and can predict the agent's future behavior by effectively assuming that it will act to achieve its goal in an efficient way, subject to the constraints of its environment. Their experiments used a looking-time (violation-of-expectation) paradigm with sprite-world stimuli. Infant participants were assigned to one of two groups. In the "obstacle" condition, infants were habituated to a sprite (a colored circle) moving ("jumping") in a curved path over an obstacle to reach another object. The size of the obstacle varied across trials, but the sprite always followed a near-shortest path over the obstacle to reach the other object. In the "no obstacle" group, infants were habituated to the sprite following the same curved "jumping" trajectory to the other object, but without an obstacle blocking its path. Both groups were then presented with the same test conditions, in which the obstacle was placed out of the sprite's way, and the sprite followed either the old, curved path or a new direct path to the other object. Infants from the "obstacle" group looked longer at the sprite following the unobstructed curved path, which (in the test condition) was now far from the most efficient route to the other object. Infants in the "no obstacle" group looked equally at both test stimuli. That is, infants in the "obstacle" condition appeared to interpret the sprite as moving in a rational goal-directed fashion, with the other object as its goal. They expected the sprite to plan a path to the goal that was maximally efficient, subject to environmental constraints when present. Infants in the "no obstacle" group appeared more uncertain about whether the sprite's movement was actually goal-directed or about what its goal was: was it simply to reach the other object, or something more complex, such as reaching the object via a particular curved path?
2.2 Inferring goals of varying complexity: rational means-ends analysis
Gergely et al. [6], expanding on work by Meltzoff [11], showed that infants can infer goals
of varying complexity, again by interpreting agents' behaviors as rational responses to environmental constraints. In two conditions, infants saw an adult demonstrate an unfamiliar complex action: illuminating a light-box by pressing its top with her forehead. In the "hands occupied" condition, the demonstrator pretended to be cold and wrapped a blanket around herself, so that she was incapable of using a more typical means (i.e., her hands) to achieve the same goal. In the "hands free" condition the demonstrator had no such constraint. Most infants in the "hands free" condition spontaneously performed the head-press action when shown the light-box one week later, but only a few infants in the "hands occupied" condition did so; the others illuminated the light-box simply by pressing it with their
hands. Thus infants appear to assume that rational agents will take the most efficient path
to their goal, and that if an agent appears to systematically employ an inefficient means, it
is likely because the agent has adopted a more complex goal that includes not only the end
state but also the means by which that end should be achieved.
2.3 Inductive inference in intentional reasoning
Gergely and colleagues interpret their findings as if infants are reasoning about intentional
action in an almost logical fashion, deducing the goal of an agent from its observed behavior, the rationality principle, and other implicit premises. However, from a computational
point of view, it is surely oversimplified to think that the intentional stance could be implemented in a deductive system. There are too many sources of uncertainty and the inference
problem is far too underconstrained for a logical approach to be successful. In contrast,
our model posits that intentional reasoning is probabilistic. People's inferences about an agent's goal should be graded, reflecting a tradeoff between the prior probability of a candidate goal and its likelihood in light of the agent's observed behavior. Inferences should become more confident as more of the agent's behavior is observed.
To test whether human intentional reasoning is consistent with a probabilistic account,
it is necessary to collect data in greater quantities and with greater precision than infant
studies allow. Hence we designed our own sprite-world experimental paradigm, to collect
richer quantitative judgments from adult observers. Many experiments are possible in this
paradigm, but here we describe just one study of statistical effects on goal inference.
Figure 1: (a) Training stimuli in complex and simple goal conditions. (b) Test stimuli 1 and 2. Test stimuli were the same for each group. (c) Mean of subjects' ratings with standard error bars (n=16).
Sixteen observers were told that they would be watching a series of animations of a mouse
running in a simple maze (a box with a single internal wall). The displays were shown
from an overhead perspective, with an animated schematic trace of the mouse's path as it ran through the box. In each display, the mouse was placed in a different starting location and ran to recover a piece of cheese at a fixed, previously learned location. Observers were told that the mouse had learned to follow a more-or-less direct path to the cheese, regardless of its starting location. Subjects saw two conditions in counterbalanced order. In one condition ("simple goal"), observers saw four displays consistent with this prior knowledge. In another condition ("complex goal"), observers saw movements suggestive of a more complex, path-dependent goal for the mouse: it first ran directly to a particular location in the middle of the box (the "via-point"), and only then ran to the cheese. Fig. 1(a) shows the mouse's four trajectories in each of these conditions. Note that the first trajectory was the same in both conditions, while the next three were different. Also, all four trajectories in both conditions passed through the same hypothetical via-point in the middle of the box, which was not marked in any conspicuous way. Hence both the simple goal ("get to the cheese") and complex goal ("get to the cheese via point X") were logically possible
the cheese?) and complex goal (?get to the cheese via point X?) were logically possible
interpretations in both conditions.
Observers' interpretations were assessed after viewing each of the four trajectories, by
showing them diagrams of two test paths (Fig. 1(b)) running from a novel starting location
to the cheese. They were asked to rate the probability of the mouse taking one or the other
test path using a 1-7 scale: 1 = definitely path 1, 7 = definitely path 2, with intermediate
values expressing intermediate degrees of confidence. Observers in the simple-goal condition always leaned towards path 1, the direct route that was consistent with the given prior
knowledge. Observers in the complex-goal condition initially leaned just as much towards
path 1, but after seeing additional trajectories they became increasingly confident that the
mouse would follow path 2 (Fig. 1(c)). Importantly, the latter group increased its average
confidence in path 2 with each subsequent trajectory viewed, consistent with the notion that
goal inference results from something like a Bayesian integration process: prior probability
favors the simple goal, but successive observations are more likely under the complex goal.
3 Previous models of intentional reasoning
The above phenomena highlight two capacities that any model of intentional reasoning should capture. First, representations of agents' mental states should include at least primitive planning capacities, with a constrained space of candidate goals and subgoals (or intended paths) that can refer to objects or locations in space, and the tendency to choose action sequences that achieve goals as efficiently as possible. Second, inferences about agents' goals should be probabilistic, and be sensitive to both prior knowledge about likely
goals as well as statistical evidence for more complex or less likely goals that better account
for observed actions.
These two components are clearly not sufficient for a complete account of human intentional reasoning, but most previous accounts do not include even these capacities. Gergely,
Csibra and colleagues [7] have proposed an informal (noncomputational) model in which
agents are essentially treated as rational planners, but inferences about agents' goals are purely deductive, without a role for probabilistic expectations or gradations of confidence. A more statistically sophisticated computational framework for inferring goals from behavior has been proposed by [13], but this approach does not incorporate planning capacities. In this framework, the observer learns to represent an agent's policies, conditional on the agent's goals. Within a static environment, this knowledge allows an observer to infer the goal of an agent's actions, predict subsequent actions, and perform imitation, but it does not support generalization to new environments where the agent's policy must adapt in response. Further, because generalization is not based on strong prior knowledge such as
the principle of rationality, many observations are needed for good performance. Likewise,
probabilistic approaches to plan recognition in AI (e.g., [3, 10]) typically represent plans
in terms of policies (state-action pairs) that do not generalize when the structure of the
environment changes in some unexpected way, and that require much data to learn from
observations of behavior.
Perhaps closest to how people reason with the intentional stance are methods for inverse
reinforcement learning (IRL) [12], or methods for learning an agent's utility function [2]. Both approaches assume a rational agent who maximizes expected utility, and attempt to infer the agent's utility function from observations of its behavior. However, the utility
functions that people attribute to intentional agents are typically much more structured
and constrained than in conventional IRL. Goals are typically defined as relations towards
objects or other agents, and may include subgoals, preferred paths, or other elements. In the
next section we describe a Bayesian framework for modeling intentional reasoning that is
similar in spirit to IRL, but more focused on the kinds of goal structures that are cognitively
natural to human adults and infants.
4 The Bayesian framework
We propose to model intentional reasoning by combining the inferential power of statistical
approaches to action understanding [12, 2, 13] with simple versions of the representational
structures that psychologists and philosophers [5, 7] have argued are essential in theory
of mind. This section first presents our general approach, and then presents a specific
mathematical model for the "mouse" sprite-world introduced above.
Most generally, we assume a world that can be represented in terms of entities, attributes,
and relations. Some attributes and relations are dynamic, indexed by a time dimension.
Some entities are agents, who can perform actions at any time t with the potential to change
the world state at time t+1. We distinguish between environmental state, denoted W , and
agent states, denoted S. For simplicity, we will assume that there is exactly one intentional
agent in the world, and that the agent's actions can only affect its own state s \in S. Let s_{0:T} be a sequence of T+1 agent states. Typically, observations of multiple state sequences of the agent are available, and in general each may occur in a separate environment. Let s_{0:T}^{1:N} be a set of N state sequences, and let w^{1:N} be a set of N corresponding environments. Let A_s be the set of actions available to the agent from state s, and let C(a) be the cost to the agent of action a \in A_s. Let P(s_{t+1} | a_t, s_t, w) be the distribution over the agent's next state s_{t+1}, given the current state s_t, an action a_t \in A_{s_t}, and the environmental state w.
The agent's actions are assumed to depend on mental states such as beliefs and desires. In
our context, beliefs correspond to knowledge about the environmental state. Desires may
be simple or complex. A simple desire is an end goal: a world state or class of states that
the agent will act to bring about. There are many possibilities for more complex goals,
such as achieving a certain end by means of a certain route, achieving a certain sequence
of states in some order, and so on. We specify a particular goal space G of simple and
complex goals for sprite-worlds in the next subsection. The agent draws goals g \in G from a prior distribution P(g | w^{1:N}), which constrains goals to be feasible in the environments w^{1:N} from which observations of the agent's behavior are available.
Given the agent's goal g and an environment w, we can define a value V_{g,w}(s) for each state s. The value function can be defined in various ways depending on the domain, task, and agent type. We specify a particular value function in the next subsection that reflects the goal structure of our sprite-world agent. The agent is assumed to choose actions according to a probabilistic policy, with a preference for actions with greater expected increases in value. Let Q_{g,w}(s, a) = \sum_{s'} P(s' | a, s, w) V_{g,w}(s') - C(a) be the expected value of the state resulting from action a, minus the cost of the action. The agent's policy is

    P(a_t | s_t, g, w) \propto \exp(\beta Q_{g,w}(s_t, a_t)).    (1)
The parameter \beta controls how likely the agent is to select the most valuable action. This policy embodies a "soft" principle of rationality, which allows for inevitable sources of suboptimal planning, or unexplained deviations from the direct path. A graphical model illustrating the relationship between the environmental state and the agent's goals, actions, and states is shown in Fig. 2.
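A minimal sketch of this softmax policy, assuming Q-values are given; the container conventions and numbers are ours.

```python
import numpy as np

def policy(Q, s, beta=1.0):
    """P(a | s) proportional to exp(beta * Q[s][a]) over the actions
    available in state s; larger beta means a more deterministic agent."""
    actions = list(Q[s].keys())
    logits = beta * np.array([Q[s][a] for a in actions])
    p = np.exp(logits - logits.max())
    return dict(zip(actions, p / p.sum()))

Q = {"s0": {"N": -1.0, "E": -0.2, "stay": -1.5}}
print(policy(Q, "s0", beta=2.0))  # 'E' is most probable
```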
The observer's task is to infer g from the agent's behavior. We assume that state sequences are independent given the environment and the goal. The observer infers g from s_{0:T}^{1:N} via Bayes' rule, conditional on w^{1:N}:

    P(g | s_{0:T}^{1:N}, w^{1:N}) \propto P(g | w^{1:N}) \prod_{i=1}^{N} P(s_{0:T}^i | g, w^i).    (2)
We assume that state transition probabilities and action probabilities are conditionally independent given the agent's goal g, the agent's current state s_t, and the environment w.
The likelihood of a state sequence s_{0:T} given a goal g and an environment w is computed by marginalizing over possible actions generating state transitions:

    P(s_{0:T} | g, w) = \prod_{t=0}^{T-1} \sum_{a_t \in A_{s_t}} P(s_{t+1} | a_t, s_t, w) P(a_t | s_t, g, w).    (3)
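Eqs. 2 and 3 compose as sketched below; the one-dimensional toy world at the end and all container conventions are illustrative assumptions.

```python
import numpy as np

def traj_loglik(s, g, w, actions_for, pi, transition):
    """Log of Eq. 3: marginalize over actions at each state transition."""
    total = 0.0
    for t in range(len(s) - 1):
        p = sum(transition(s[t + 1], a, s[t], w) * pi(a, s[t], g, w)
                for a in actions_for(s[t], w))
        total += np.log(p)
    return total

def goal_posterior(trajectories, goals, prior, loglik):
    """Eq. 2: posterior over goals given observed (sequence, environment) pairs."""
    logp = {g: np.log(prior[g]) for g in goals}
    for s, w in trajectories:
        for g in goals:
            logp[g] += loglik(s, g, w)
    m = max(logp.values())
    z = sum(np.exp(v - m) for v in logp.values())
    return {g: np.exp(logp[g] - m) / z for g in goals}

# Toy check: a 1-D line of states 0..2 with deterministic moves.
def actions_for(s, w):
    return [+1, -1] if 0 < s < 2 else ([+1] if s == 0 else [-1])

def transition(s2, a, s1, w):
    return 1.0 if s2 == s1 + a else 0.0

def pi(a, s, g, w, beta=2.0):  # softmax toward the goal, as in Eq. 1
    acts = actions_for(s, w)
    q = np.array([-abs((s + b) - g) for b in acts])
    p = np.exp(beta * q)
    return p[acts.index(a)] / p.sum()

ll = lambda s, g, w: traj_loglik(s, g, w, actions_for, pi, transition)
print(goal_posterior([([0, 1, 2], None)], goals=[0, 2],
                     prior={0: 0.5, 2: 0.5}, loglik=ll))  # strongly favors g = 2
```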
Figure 2: Two time-slice dynamic Bayes net representation
of our model, where W is the environmental state, G is
the agent's goal, S_t is the agent's state at time t, and A_t is the agent's action at time t. Beliefs, desires, and actions
intuitively map onto W , G and A, respectively.
4.1 Modeling sprite-world inferences
Several additional assumptions are necessary to apply the above framework to any specific
domain, such as the sprite-worlds discussed in §2. The size of the grid, the location of
obstacles, and likely goal points (such as the location of the cheese in our experimental
stimuli) are represented by W , and assumed to be known to both the agent and the observer.
The agent's state space S consists of valid locations in the grid. All state sequences are
assumed to be of the same length. The action space As consists of moves in all compass
directions {N, S, E, W, N E, N W, SE, SW }, except where blocked by an obstacle, and
action costs are Euclidean. The agent can also choose to remain still with cost 1. We assume
P (st+1 |at , st , w) takes the agent to the desired adjacent grid point deterministically.
The set of possible goals G includes both simple and complex goals. Simple goals will just
be specific end states in S. While many kinds of complex goals are possible, we assume
here that a complex goal is just the combination of a desired end state with a desired means
to achieving that end. In our sprite-worlds, we identify "desired means" with a constraint that the agent must pass through an additional specified location en route, such as the via-point in the experiment from §2.3. Because the number of complex goals defined in this way is much larger than the number of simple goals, the likelihood of each complex goal is small relative to the likelihood of individual simple goals. In addition, although path-dependent goals are possible, they should not be likely a priori. We thus set the prior P(g | w^{1:N}) to favor simple goals by a factor of \kappa. For simplicity, we assume that the agent draws just a single invariant goal g \in G from P(g | w^{1:N}), and we assume that this prior distribution is known to the observer. More generally, an agent's goals may vary across different environments, and the prior P(g | w^{1:N}) may have to be learned.
We define the value of a state Vg,w (s) as the expected total cost to the agent of achieving g
while following the policy given in Eq. 1. We assume the desired end-state is absorbing and
cost-free, which implies that the agent attempts the stochastic shortest path (with respect
to its probabilistic policy) [1]. If g is a complex goal, Vg,w (s) is based on the stochastic
shortest path through the specified via-point. The agent's value function is computed using
the value iteration algorithm [1] with respect to the policy given in Eq. 1.
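A sketch of value iteration under the soft policy of Eq. 1, simplified to deterministic transitions; the corridor example and parameters are our own.

```python
import numpy as np

def soft_value_iteration(states, actions_for, transition, cost, goal_states,
                         beta=1.0, iters=200):
    """Iterate V(s) = E_pi[Q(s, a)] with pi the softmax policy of Eq. 1.

    Goal states are absorbing and cost-free, so V approximates the
    (negative) expected total cost of the stochastic shortest path.
    """
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s in goal_states:
                continue
            q = np.array([V[transition(s, a)] - cost(a) for a in actions_for(s)])
            p = np.exp(beta * (q - q.max()))
            V[s] = float((p / p.sum()) @ q)
    return V

# A 1-D corridor 0..4 with the goal at 4 and unit move costs.
V = soft_value_iteration(range(5),
                         actions_for=lambda s: [a for a in (-1, +1) if 0 <= s + a <= 4],
                         transition=lambda s, a: s + a,
                         cost=lambda a: 1.0,
                         goal_states={4})
print(V)  # values increase toward the goal
```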
Finally, to compare our model's predictions with behavioral data from human observers, we must specify how to compute the probability of novel trajectories s'_{0:T} in a new environment w', such as the test stimuli in Fig. 1, conditioned on an observed sequence s_{0:T} in environment w. This is just an average over the predictions for each possible goal g:

    P(s'_{0:T} | s_{0:T}, w, w') = \sum_{g \in G} P(s'_{0:T} | g, w') P(g | s_{0:T}, w, w').    (4)
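Eq. 4 then reduces to a posterior-weighted average of path likelihoods, which composes directly with the goal-posterior sketch above:

```python
import numpy as np

def predict_path(test_paths, goals, posterior, loglik, w_new):
    """Score each candidate path in a new environment w' by averaging its
    likelihood over the goal posterior inferred from training data (Eq. 4)."""
    return [sum(posterior[g] * np.exp(loglik(path, g, w_new)) for g in goals)
            for path in test_paths]
```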
5 Sprite-world simulations
5.1 Inferring an invariant goal
As a starting point for testing our model, we return to the experiments of Gergely et al. [8,
4, 7], reviewed in §2.1. Our input to the model, shown in Fig. 3(a,b), differs slightly from
the original stimuli used in [8], but the relevant details of interest are spared: goal-directed
action in the presence of constraints. Our model predictions, shown in Fig. 3(c), capture
the qualitative results of these experiments, showing a large contrast between the straight
path and the curved path in the condition with an obstacle, and a relatively small contrast
in the condition with no obstacle. In the "no obstacle" condition, our model infers that the
agent has a more complex goal, constrained by a via-point. This significantly increases the
probability of the curved test path, to the point where the difference between the probability
of observing curved and straight paths is negligible.
Figure 3: Inferring an invariant goal. (a) Training input in obstacle and no obstacle conditions. (b)
Test input is the same in each condition. (c) Model predictions: negative log likelihoods of test paths
1 and 2 given data from training condition. In the obstacle condition, a large dissociation is seen
between path 1 and path 2, with path 1 being much more likely. In the no obstacle condition, there is
not a large preference for either path 1 or path 2, qualitatively matching Gergely et al.'s results [8].
5.2 Inferring goals of varying complexity: rational means-ends analysis
Our next example is inspired by the studies of Gergely et al. [6] described in §2.2. In our sprite-world version of the experiment, we varied the amount of evidence for a simple versus a complex goal, by inputting the same three trajectories with and without an obstacle present (Fig. 4(a)). In the "obstacle" condition, the trajectories were all approximately shortest paths to the goal, because the agent was forced to take indirect paths around the obstacle. In the "no obstacle" condition, no such constraint was present to explain the curved paths. Thus a more complex goal is inferred, with a path constrained to pass through a via-point. Given a choice of test paths, shown in Fig. 4(b), the model shows a double dissociation between the probability of the direct path and the curved path through the
putative via-point, given each training condition (Fig. 4(c)), similar to the results in [6].
Figure 4: Inferring goals of varying complexity. (a) Training input in obstacle and no obstacle conditions. (b) Test input in each condition. (c) Model predictions: a double dissociation between
probability of test paths 1 and 2 in the two conditions. This reflects a preference for the straight path
in the first condition, where there is an obstacle to explain the agent's deflections in the training input,
and a preference for the curved path in the second condition, where a complex goal is inferred.
5.3 Inductive inference in intentional reasoning
Lastly, we present the results of our model on our own behavioral experiment, first described in §2.3 and shown in Fig. 1. These data demonstrated the statistical nature of people's intentional inferences. Fig. 5 compares people's judgments of the probability that the agent takes a particular test path with our model's predictions. To place model predictions and human judgments on a comparable scale, we fit a sigmoidal psychometric transformation to the computed log posterior odds for the curved test path versus the straight path. The Bayesian model captures the graded shift in people's expectations in the "complex goal" condition, as evidence accumulates that the agent always seeks to pass through an arbitrary via-point en route to the end state.
Figure 5: Experimental results: model fit for behavioral data.
Mean ratings are plotted as hollow circles. Error bars give standard
error. The log posterior odds from the model were fit to subjects'
ratings using a scaled sigmoid function with range (1, 7). The sigmoid function includes bias and gain parameters, which were fit to
the human data by minimizing the sum-squared error between the
model predictions and mean subject ratings.
6 Conclusion
We presented a Bayesian framework to explain several core aspects of intentional reasoning: inferring the goal of an agent based on observations of its behavior, and predicting
how the agent will act when constraints or initial conditions for action change. Our model
captured basic qualitative inferences that even preverbal infants have been shown to perform, as well as more subtle quantitative inferences that adult observers made in a novel
experiment. Two future challenges for our computational framework are: representing and
learning multiple agent types (e.g. rational, irrational, random, etc.), and representing and
learning hierarchically structured goal spaces that vary across environments, situations and
even domains. These extensions will allow us to further test the power of our computational
framework, and will support its application to the wide range of intentional inferences that
people constantly make in their everyday lives.
Acknowledgments: We thank Whitman Richards, Konrad Körding, Kobi Gal, Vikash Mansinghka, Charles Kemp, and Pat Shafto for helpful comments and discussions.
References
[1] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, Belmont,
MA, 2nd edition, 2001.
[2] U. Chajewska, D. Koller, and D. Ormoneit. Learning an agent's utility function by observing behavior. In Proc. of the 18th Intl. Conf. on Machine Learning (ICML), pages 35–42, 2001.
[3] E. Charniak and R. Goldman. A probabilistic model of plan recognition. In Proc. AAAI, 1991.
[4] G. Csibra, G. Gergely, S. Bíró, O. Koós, and M. Brockbank. Goal attribution without agency cues: the perception of "pure reason" in infancy. Cognition, 72:237–267, 1999.
[5] D. C. Dennett. The Intentional Stance. Cambridge, MA: MIT Press, 1987.
[6] G. Gergely, H. Bekkering, and I. Király. Rational imitation in preverbal infants. Nature, 415:755, 2002.
[7] G. Gergely and G. Csibra. Teleological reasoning in infancy: the naïve theory of rational action. Trends in Cognitive Sciences, 7(7):287–292, 2003.
[8] G. Gergely, Z. Nádasdy, G. Csibra, and S. Bíró. Taking the intentional stance at 12 months of age. Cognition, 56:165–193, 1995.
[9] F. Heider and M. A. Simmel. An experimental study of apparent behavior. American Journal of Psychology, 57:243–249, 1944.
[10] L. Liao, D. Fox, and H. Kautz. Learning and inferring transportation routines. In Proc. AAAI, pages 348–353, 2004.
[11] A. N. Meltzoff. Infant imitation after a 1-week delay: Long-term memory for novel acts and multiple stimuli. Developmental Psychology, 24:470–476, 1988.
[12] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proc. of the 17th Intl. Conf. on Machine Learning (ICML), pages 663–670, 2000.
[13] R. P. N. Rao, A. P. Shon, and A. N. Meltzoff. A Bayesian model of imitation in infants and robots. In Imitation and Social Learning in Robots, Humans, and Animals. (in press).
[14] B. J. Scholl and P. D. Tremoulet. Perceptual causality and animacy. Trends in Cognitive Sciences, 4(8):299–309, 2000.
| 2815 |@word trial:1 illustrating:1 middle:2 version:3 inversion:1 nd:1 simulation:1 seek:1 minus:1 initial:1 configuration:1 series:1 charniak:1 preverbal:5 ording:1 animated:1 current:2 must:5 realize:1 belmont:1 subsequent:2 shape:1 treating:1 designed:1 infant:26 generative:2 intelligence:1 cue:1 core:2 colored:1 mental:4 provides:1 location:10 successive:1 preference:4 sigmoidal:1 mathematical:1 along:1 direct:5 become:1 qualitative:3 consists:2 overhead:1 headed:1 behavioral:3 expected:5 rapid:1 behavior:25 planning:4 brain:1 inspired:2 oversimplified:1 goldman:1 begin:1 baker:1 maximizes:1 what:1 kind:3 inputting:1 finding:2 transformation:1 gal:1 quantitative:3 remember:1 hypothetical:1 act:8 exactly:1 wrong:1 scaled:1 control:2 appear:1 causally:1 bertsekas:1 negligible:1 accumulates:1 path:54 approximately:1 might:1 collect:2 someone:1 range:2 statistically:1 directed:4 acknowledgment:1 spontaneously:1 testing:1 differs:1 cold:1 empirical:2 significantly:1 inferential:1 matching:1 intention:2 confidence:3 seeing:1 suggest:1 get:2 onto:1 ast:1 context:1 gradation:1 conventional:1 map:1 demonstrated:1 chajewska:1 transportation:1 primitive:1 regardless:1 starting:4 attribution:1 focused:1 simplicity:2 pure:1 rule:1 importantly:1 notion:1 rationality:5 programming:1 element:1 trend:2 recognition:2 animacy:1 walking:1 richards:1 blocking:1 observed:6 inserted:1 role:1 capture:3 movement:2 removed:1 russell:1 valuable:1 ran:4 developmental:2 environment:16 agency:2 complexity:4 constrains:1 asked:1 dynamic:3 irrational:1 depend:2 purely:1 whitman:1 indirect:1 herself:1 represented:2 various:1 forced:1 describe:3 artificial:1 formation:2 whose:1 richer:1 larger:1 apparent:1 ability:1 favor:2 think:1 sequence:13 pressing:2 net:1 propose:1 relevant:1 combining:1 dennett:1 achieve:6 representational:1 everyday:2 double:1 intl:2 generating:1 object:11 derive:1 depending:1 pose:1 qt:1 mansinghka:1 eq:2 strong:2 implemented:1 blanket:1 implies:1 direction:2 posit:1 shafto:1 meltzoff:3 attribute:3 stochastic:2 human:9 saxe:2 viewing:1 require:1 premise:1 argued:1 generalization:2 wall:1 extension:1 around:2 intentional:31 exp:1 cognition:3 predict:5 week:2 vary:3 proc:4 unexplained:1 si0:1 saw:4 sensitive:1 deductive:2 reflects:2 mit:2 csibra:5 clearly:1 always:3 reaching:1 occupied:2 avoid:1 varying:4 she:8 philosopher:1 likelihood:5 logically:1 contrast:3 spared:1 sense:1 helpful:1 inference:18 dependent:2 typically:6 initially:1 her:4 hidden:2 relation:3 koller:1 going:1 scholl:1 denoted:2 priori:1 plan:6 constrained:4 integration:1 animal:1 construct:1 ng:1 icml:2 inevitable:1 future:2 others:1 stimulus:14 few:1 employ:1 ve:1 individual:1 cognitively:1 intended:2 attempt:2 interest:1 highly:3 possibility:2 violation:1 light:4 behind:1 necessary:2 jumping:2 fox:1 indexed:1 old:1 euclidean:1 circle:3 plotted:1 desired:6 causal:1 obst:8 theoretical:1 uncertain:1 increased:1 modeling:3 soft:2 obstacle:28 compass:1 rao:1 cost:6 deviation:1 delay:1 successful:1 graphic:1 too:2 optimally:1 motivating:1 engagement:1 confident:2 st:15 definitely:2 probabilistic:11 told:2 together:1 quickly:1 mouse:9 na:1 gergely:11 again:1 w1:8 squared:1 aaai:2 choose:3 woman:2 watching:1 cognitive:4 pretended:1 american:1 conf:2 inefficient:1 return:1 account:6 potential:1 includes:3 piece:1 performed:1 later:1 view:1 observer:18 observing:3 recover:1 participant:1 bayes:2 kautz:1 tremoulet:1 became:1 who:2 efficiently:1 likewise:1 judgment:3 correspond:1 identify:1 dissociation:2 generalize:1 bayesian:9 
trajectory:10 viapoint:1 straight:6 explain:3 reach:3 colleague:3 static:1 rational:12 gain:1 massachusetts:1 logical:2 knowledge:6 subsection:2 infers:3 organized:1 subtle:1 routine:1 impoverished:1 actually:1 reflecting:1 sophisticated:1 appears:2 follow:2 response:2 maximally:1 specify:3 box:7 just:6 implicit:1 lastly:2 working:1 hand:6 irl:3 o:1 perhaps:1 scientific:1 grounding:1 effect:1 inductive:2 hence:2 assigned:1 stance:8 jbt:1 conditionally:1 adjacent:1 wrapped:1 konrad:1 trying:1 complete:1 demonstrate:1 interpreting:3 bring:1 reasoning:19 novel:5 charles:1 sigmoid:2 absorbing:1 unobstructed:1 physical:1 subgoals:2 discussed:1 interpretation:2 interpret:2 forehead:1 refer:2 unfamiliar:1 expressing:1 blocked:1 cambridge:1 ai:1 grid:3 had:2 moving:2 robot:2 longer:1 etc:1 something:3 closest:1 own:4 showed:2 posterior:2 perspective:1 occasionally:1 route:3 certain:3 incapable:1 life:1 joshua:1 seen:1 captured:1 greater:3 additional:3 surely:1 paradigm:3 shortest:4 multiple:3 infer:6 adapt:2 long:1 equally:1 qg:2 schematic:1 prediction:9 basic:1 ko:1 liao:1 vision:1 expectation:4 circumstance:3 essentially:1 iteration:1 represent:2 achieved:1 addition:1 diagram:1 source:2 comment:1 subject:6 tend:2 spirit:1 odds:2 near:1 presence:1 intermediate:2 affect:1 fit:5 psychology:3 counterbalanced:1 opposite:1 suboptimal:1 tradeoff:1 shift:1 vikash:1 whether:2 utility:5 passed:1 sprite:23 cause:1 action:37 generally:2 se:1 amount:1 tenenbaum:1 demonstrator:2 group:9 four:4 achieving:5 sum:1 deflection:1 inverse:4 uncertainty:1 place:1 almost:1 planner:1 putative:1 draw:2 comparable:1 illuminated:1 followed:2 distinguish:1 display:4 occur:1 constraint:10 scene:1 aspect:2 relatively:1 department:1 structured:2 according:1 combination:1 across:4 remain:1 increasingly:1 slightly:1 conspicuous:1 wi:1 adasdy:1 s1:3 simmel:1 psychologist:1 intuitively:1 invariant:5 previously:1 turn:1 needed:1 mind:2 end:11 informal:1 adopted:1 available:3 apply:1 original:1 assumes:2 running:3 include:4 top:1 graphical:1 sw:1 embodies:1 graded:2 suddenly:2 move:2 quantity:1 looked:2 said:1 rationally:1 separate:1 thank:1 capacity:5 street:1 entity:2 athena:1 chris:1 kemp:1 reason:3 assuming:1 length:1 relationship:1 minimizing:1 trace:1 negative:1 kir:1 reliably:1 policy:9 perform:4 allowing:1 observation:8 curved:14 pat:1 situation:3 looking:1 head:1 incorporate:1 varied:2 arbitrary:1 aly:1 inferred:2 rebecca:1 rating:6 inverting:2 cast:1 pair:1 introduced:1 specified:2 learned:3 adult:9 bar:2 perception:1 teleological:1 appeared:2 challenge:1 memory:1 explanation:1 belief:7 power:2 event:1 critical:1 treated:1 natural:1 predicting:1 ormoneit:1 representing:3 technology:1 review:1 understanding:3 literature:1 prior:10 marginalizing:1 relative:1 highlight:1 analogy:1 versus:2 sixteen:1 vg:4 age:1 illuminating:1 agent:84 degree:2 sufficient:1 consistent:6 s0:6 principle:5 systematically:1 placed:2 free:3 bias:1 allow:2 institute:1 explaining:1 wide:1 taking:3 slice:1 dimension:1 world:19 transition:2 maze:1 qn:1 valid:1 qualitatively:1 reinforcement:2 made:1 simplified:1 far:2 social:1 approximate:1 preferred:1 cheese:7 suggestive:1 assumed:4 kobi:1 imitation:5 why:1 reviewed:1 learn:1 nature:2 robust:2 expanding:1 complex:27 domain:5 did:4 hierarchically:1 animation:1 edition:1 convey:1 fig:12 psychometric:1 causality:1 fashion:2 precision:1 inferring:9 deterministically:1 candidate:2 governed:1 perceptual:2 infancy:2 learns:1 down:1 specific:4 showing:2 habituated:2 evidence:3 essential:1 underconstrained:2 
effectively:1 importance:1 conditioned:1 bir:2 simply:2 likely:8 cheaply:1 deducing:1 desire:8 unexpected:2 shon:1 environmental:9 constantly:1 ma:2 conditional:2 goal:87 marked:1 viewed:1 month:1 towards:3 feasible:1 change:6 typical:1 except:1 called:1 total:1 pas:3 experimental:4 tendency:1 cond:13 select:2 internal:1 people:8 support:2 latter:1 assessed:1 hollow:1 heider:1 phenomenon:1 |
1,999 | 2,816 | Sequence and Tree Kernels
with Statistical Feature Mining
Jun Suzuki and Hideki Isozaki
NTT Communication Science Laboratories, NTT Corp.
2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0237, Japan
{jun, isozaki}@cslab.kecl.ntt.co.jp
Abstract
This paper proposes a new approach to feature selection based on a statistical feature mining technique for sequence and tree kernels. Since natural language data take discrete structures, convolution kernels, such as sequence and tree kernels, are advantageous for both the concept and accuracy of many natural language processing tasks. However, experiments have shown that the best results can only be achieved when these kernels deal with only small, limited sub-structures. This paper discusses this issue of convolution kernels and then proposes a statistical feature selection method that enables us to use larger sub-structures effectively. The proposed method, in order to execute efficiently, can be embedded into the original kernel calculation process by using sub-structure mining algorithms. Experiments on real NLP tasks confirm the problem with the conventional methods and compare their performance with that of the proposed method.
1 Introduction
Since natural language data take the form of sequences of words and are generally analyzed
into discrete structures, such as trees (parsed trees), discrete kernels, such as sequence
kernels [7, 1] and tree kernels [2, 5], have been shown to offer excellent results in the
natural language processing (NLP) field. Conceptually, these proposed kernels are defined
as instances of convolution kernels [3, 11], which provides the concept of kernels over
discrete structures.
However, unfortunately, experiments have shown that in some cases there is a critical issue
with convolution kernels in NLP tasks [2, 1, 10]. That is, since natural language data
contain many types of symbols, NLP tasks usually deal with an extremely high-dimensional and sparse feature space. As a result, the convolution kernel approach can never be trained
effectively, and it behaves like a nearest neighbor rule. To avoid this issue, we generally
eliminate large sub-structures from the set of features used. However, the main reason for
using convolution kernels is that we aim to use structural features easily and efficiently.
If their use is limited to only very small structures, this negates the advantages of using
convolution kernels.
This paper discusses this issue of convolution kernels, in particular sequence and tree kernels, and proposes a new method based on a statistical significance test. The proposed method deals only with those features that are statistically significant for solving the target task, and large significant sub-structures can be used without over-fitting. Moreover, by using sub-structure mining algorithms, the proposed method can be executed efficiently by embedding it in the original kernel calculation process, which is defined by dynamic-programming (DP) based calculation.
2 Convolution Kernels for Sequences and Trees
Convolution kernels have been proposed as a concept of kernels for discrete structures,
such as sequences, trees and graphs. This framework defines the kernel function between
input objects as the convolution of 'sub-kernels', i.e., the kernels for the decompositions
(parts or sub-structures) of the objects. Let X and Y be discrete objects. Conceptually,
convolution kernels K(X, Y ) enumerate all sub-structures occurring in X and Y and then
calculate their inner product, which is simply written as $K(X, Y) = \langle \phi(X), \phi(Y) \rangle = \sum_i \phi_i(X) \cdot \phi_i(Y)$. Here $\phi$ represents the feature mapping from the discrete object to the feature space; that is, $\phi(X) = (\phi_1(X), \ldots, \phi_i(X), \ldots)$. Therefore, with sequence kernels, input objects $X$ and $Y$ are sequences, and $\phi_i(X)$ is a sub-sequence; with tree kernels, $X$ and $Y$ are trees, and $\phi_i(X)$ is a sub-tree. Up to now, many kinds of sequence and tree kernels
have been proposed for a variety of different tasks. To clarify the discussion, this paper
basically follows the framework of [1], which proposed a gapped word sequence kernel,
and [5], which introduced a labeled ordered tree kernel.
We can treat a sequence as a special form of tree if we say that sequences are rooted by their last symbol and that each node has one child, namely its previous symbol. Thus, in this paper, the word 'tree' always includes sequences. Let $L$ be a set of finite symbols. Then, let $L^n$ be a set of symbols whose sizes are $n$, and let $P(L^n)$ be a set of trees constructed from $L^n$. The meaning of 'size' in this paper is the number of nodes in a tree. We denote a tree $u \in P(L_1^n)$ whose size is $n$ or less, where $\cup_{m=1}^{n} L^m = L_1^n$. Let $T$ be a tree and $\mathrm{sub}(T)$ be a function that returns the set of all possible sub-trees in $T$. We define a function $C_u(t)$ that returns a constant $\lambda$ $(0 < \lambda \le 1)$ if the sub-tree $t$ covers $u$ with the same root symbol. For example, a sub-tree 'a-b-c-d', where 'a', 'b', 'c' and 'd' represent symbols and '-' represents an edge between symbols, covers the sub-trees 'd', 'a-c-d' and 'b-d'. That is, $C_u(t) = \lambda$ if $u$ matches $t$ allowing node skips, and $0$ otherwise. We also define a function $\gamma_u(t)$ that returns the difference in size between sub-trees $t$ and $u$. For example, if $t$ = a-b-c-d and $u$ = a-b, then $\gamma_u(t) = |4 - 2| = 2$.
Formally, sequence and tree kernels can be defined in the same form as
$$K^{SK,TK}(T^1, T^2) = \sum_{u \in P(L_1^n)} \Big( \sum_{t^1 \in \mathrm{sub}(T^1)} C_u(t^1)\, \lambda^{\gamma_u(t^1)} \Big) \Big( \sum_{t^2 \in \mathrm{sub}(T^2)} C_u(t^2)\, \lambda^{\gamma_u(t^2)} \Big). \qquad (1)$$
Note that this formula also includes the node-skip framework that is generally introduced only in sequence kernels [7, 1]; $\lambda$ is the decay factor that handles the gaps present in sub-trees $u$ and $t$.
Sequence and tree kernels are defined by recursive formulas so that they can be calculated efficiently, instead of via the explicit calculation of Equation (1). Moreover, when implemented, these kernels can be calculated in $O(n|T^1||T^2|)$, where $|T|$ represents the number of nodes in $T$, by using the DP technique. Note that if the kernel does not use the size restriction, the calculation cost becomes $O(|T^1||T^2|)$.
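For intuition, Equation (1) can be evaluated explicitly on toy inputs. The sketch below is a simplified, hypothetical illustration for the sequence case only: features are gapped sub-sequences of size at most n, each occurrence is weighted by λ raised to the number of skipped symbols, and the kernel is the inner product of the resulting feature maps. It is exponential in the input length and is meant only to make the definition concrete; the recursive DP formulation is what one would actually implement.

```python
from collections import Counter
from itertools import combinations

def feature_map(seq, n, lam):
    """Explicit (exponential-time) feature map over gapped sub-sequences
    of size at most n. An occurrence at positions i_1 < ... < i_k gets
    weight lam ** (span - k), where span = i_k - i_1 + 1, so gaps decay."""
    phi = Counter()
    for k in range(1, n + 1):
        for idx in combinations(range(len(seq)), k):
            u = tuple(seq[i] for i in idx)
            span = idx[-1] - idx[0] + 1
            phi[u] += lam ** (span - k)
    return phi

def sequence_kernel(s, t, n=3, lam=0.5):
    """K(s, t) = <phi(s), phi(t)>, summing over shared sub-sequences."""
    phi_s, phi_t = feature_map(s, n, lam), feature_map(t, n, lam)
    return sum(v * phi_t[u] for u, v in phi_s.items() if u in phi_t)

# Example: shared sub-sequences of 'abcd' and 'abd' drive the kernel value.
print(sequence_kernel("abcd", "abd"))
```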
3 Problem of Applying Convolution Kernels to Real Tasks
According to the original definition of convolution kernels, all of the sub-structures are enumerated and calculated for the kernels. The number of sub-structures in an input object usually becomes exponential in the input object's size. The number of symbols, $|L|$, is generally a very large number (i.e., more than 10,000), since words are treated as symbols. Moreover, the appearances of sub-structures (sub-sequences and sub-trees) are highly correlated with those of their own sub-structures. As a result, the dimension of the feature space becomes extremely high, and all kernel values $K(X, Y)$ are very small compared to the kernel value of the object with itself, $K(X, X)$. In this situation, the convolution kernel approach can never be trained effectively, and it behaves like a nearest neighbor rule; we obtain results that are very precise but have very low recall. The details of this issue were described in [2].
To avoid this, most conventional methods use an approach that involves smoothing the
kernel values or eliminating features based on the sub-structure size. For sequence kernels,
[1] uses a feature elimination method based on the size $n$ of the sub-sequences. This means that the kernel calculation deals only with those sub-sequences whose length is $n$ or less. Similarly to the sequence kernel, [2] proposed a method that restricts the features based on sub-tree depth for tree kernels. These methods seem to work well on the surface; however, good results can only be achieved when $n$ is very small, i.e., $n$ = 2 or 3. For example, $n$ = 3
showed the best performance for parsing in the experimental results of [2], and n = 2
showed the best for the text classification task in [1]. The main reason for using these
kernels is that they allow us to employ structural features simply and efficiently. When
only small-sized sub-structures are used (i.e. n = 2 or 3), the full benefits of the kernels
are missed.
Moreover, these results do not mean that no larger-sized sub-structures are useful. In some
cases we already know that certain larger sub-structures can be significant features for
solving the target problem. That is, significant larger sub-structures, which the conventional methods cannot deal with efficiently, have the potential to further improve the
performance. The aim of the work described in this paper is to be able to use any significant
sub-structure efficiently, regardless of its size, to better solve NLP tasks.
4 Statistical Feature Mining Method for Sequence and Tree Kernels
This section proposes a new approach to feature selection that is based on a statistical significance test, in contrast to the conventional methods, which use the sub-structure size. To simplify the discussion, we restrict ourselves hereafter to the two-class (positive and negative) supervised classification problem. In our approach, we test the statistical deviation of all sub-structures in the training samples between their appearances in positive and negative samples, and then select as features only those sub-structures whose deviation is larger than a certain threshold $\tau$. This allows us to select only the statistically significant sub-structures. In this paper, we explain our proposed method using the chi-squared ($\chi^2$) value as the statistical metric.
We note, however, that we can use many types of statistical metric in our proposed method.

Table 1: Contingency table and notation for the chi-squared value

                   $c$                  $\bar{c}$              $\sum$ (row)
    $u$            $O_{uc}$             $O_{u\bar{c}}$         $O_u$
    $\bar{u}$      $O_{\bar{u}c}$       $O_{\bar{u}\bar{c}}$   $O_{\bar{u}}$
    $\sum$ (col)   $O_c$                $O_{\bar{c}}$          $N$

First, we briefly explain how to calculate the $\chi^2$ value by referring to Table 1. $c$ and $\bar{c}$ represent the names of the classes: $c$ for the positive class and $\bar{c}$ for the negative class. $O_{ij}$, where $i \in \{u, \bar{u}\}$ and $j \in \{c, \bar{c}\}$, represents the number of samples in each case. $O_{u\bar{c}}$, for instance, represents the number of $u$ that appeared in $\bar{c}$. Let $N$ be the total number of training samples. Since $N$ and $O_c$ are constant for the training samples, $\chi^2$ can be obtained as a function of $O_u$ and $O_{uc}$. The $\chi^2$ value expresses the normalized deviation of the observation from the expectation:
$$\mathrm{chi}(O_u, O_{uc}) = \sum_{i \in \{u,\bar{u}\},\ j \in \{c,\bar{c}\}} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}, \qquad \text{where } E_{ij} = N \cdot \frac{O_i}{N} \cdot \frac{O_j}{N}$$
represents the expectation. We simply write $\mathrm{chi}(O_u, O_{uc})$ as $\chi^2(u)$.
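A direct implementation of this statistic is small. The following sketch (hypothetical, not taken from the paper) computes χ²(u) from the four observed counts of Table 1, given the class total O_c and the sample total N:

```python
def chi2(o_u, o_uc, o_c, n_total):
    """Chi-squared value of a sub-structure u from its contingency table.
    o_u: samples containing u; o_uc: samples containing u in class c;
    o_c: positive samples; n_total: all training samples."""
    observed = [
        (o_uc, o_u, o_c),                                         # (u, c)
        (o_u - o_uc, o_u, n_total - o_c),                         # (u, c-bar)
        (o_c - o_uc, n_total - o_u, o_c),                         # (u-bar, c)
        (n_total - o_u - o_c + o_uc, n_total - o_u, n_total - o_c),  # (u-bar, c-bar)
    ]
    value = 0.0
    for o_ij, o_i, o_j in observed:
        e_ij = o_i * o_j / n_total  # expectation E_ij = N * (O_i/N) * (O_j/N)
        if e_ij > 0:
            value += (o_ij - e_ij) ** 2 / e_ij
    return value
```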
In the kernel calculation with statistical feature selection, if $\chi^2(u) < \tau$ holds, that is, if $u$ is not statistically significant, then $u$ is eliminated from the features, and the value of $u$ is presumed to be 0 in the kernel value. Therefore, the sequence and tree kernels with feature selection (SK+FS, TK+FS) can be defined as follows:
$$K^{SK,TK+FS}(T^1, T^2) = \sum_{u \in \{u \mid \tau \le \chi^2(u),\ u \in P(L_1^n)\}} \Big( \sum_{t^1 \in \mathrm{sub}(T^1)} C_u(t^1)\, \lambda^{\gamma_u(t^1)} \Big) \Big( \sum_{t^2 \in \mathrm{sub}(T^2)} C_u(t^2)\, \lambda^{\gamma_u(t^2)} \Big). \qquad (2)$$
The difference from the original kernels is simply the condition on the first summation, which is $\tau \le \chi^2(u)$.
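In explicit form, Equation (2) simply filters the feature space of Equation (1) through the χ² test. Continuing the toy sketches above (hypothetical; it reuses `feature_map` and `chi2` from the earlier blocks, and counts each sub-sequence once per training sample):

```python
def selected_features(train_data, labels, tau, n, lam):
    """All sub-sequences whose chi-squared value reaches the threshold tau."""
    n_total = len(train_data)
    o_c = sum(1 for y in labels if y > 0)
    counts, pos_counts = Counter(), Counter()
    for seq, y in zip(train_data, labels):
        for u in feature_map(seq, n, lam):  # document-level occurrence
            counts[u] += 1
            if y > 0:
                pos_counts[u] += 1
    return {u for u in counts
            if chi2(counts[u], pos_counts[u], o_c, n_total) >= tau}

def fs_kernel(s, t, features, n=3, lam=0.5):
    """K_FS(s, t): inner product restricted to significant features."""
    phi_s, phi_t = feature_map(s, n, lam), feature_map(t, n, lam)
    return sum(phi_s[u] * phi_t[u] for u in features
               if u in phi_s and u in phi_t)
```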
The basic idea of using a statistical metric to select features is quite natural and is not, by itself, especially novel. We note, however, that it is not clear how to calculate the kernels efficiently with statistical feature selection. It is computationally infeasible to calculate $\chi^2(u)$ for all possible $u$ with a naive exhaustive method. In our approach, we take advantage of sub-structure mining algorithms in order to calculate $\chi^2(u)$ efficiently and to embed statistical feature selection in the kernel calculation. Formally, sub-structure mining finds the complete set, without duplication, of all significant (generally frequent) sub-structures in a dataset. Specifically, we apply a combination of a sequential pattern mining technique, PrefixSpan [9], and a statistical metric pruning (SMP) method, Apriori SMP [8]. PrefixSpan can substantially reduce the search space when enumerating all significant sub-sequences. Briefly, it finds any sub-sequence $uw$ of size $n$ by searching for a single symbol $w$ in the projected database of the sub-sequence (prefix) $u$ of size $n - 1$. The projected database is a partial database that contains only the postfixes (pointers, in the implementation) following the appearances of the prefix $u$ in the database. The search starts from $n = 1$; that is, it enumerates all the significant sub-sequences by the recursive calculation of pattern-growth, searching in the projected database of prefix $u$ and adding a symbol $w$ to $u$, and prefix-projection, making the projected database of $uw$.
Before explaining the algorithm of the proposed kernels, we introduce the upper bound of the $\chi^2$ value. The upper bound of the $\chi^2$ value of a sequence $uv$, which is the concatenation of sequences $u$ and $v$, can be calculated from the contingency table of the prefix $u$ [8]:
$$\chi^2(uv) \le \hat{\chi}^2(u) = \max\big(\mathrm{chi}(O_{uc}, O_{uc}),\ \mathrm{chi}(O_u - O_{uc}, 0)\big).$$
This upper bound indicates that if $\hat{\chi}^2(u) < \tau$ holds, no (super-)sequence $uv$ whose prefix is $u$ can reach the threshold, $\tau \le \chi^2(uv)$. In our context, we can therefore eliminate all (super-)sequences $uv$ from the feature candidates without explicitly evaluating $uv$.
Using this property in the PrefixSpan algorithm, we can avoid evaluating all the (super-)sequences $uv$ by evaluating the upper bound of the sequence $u$. After finding the number of appearances of each individual symbol $w$ in the projected database of $u$, we evaluate $uw$ under the following three conditions: (1) $\tau \le \chi^2(uw)$; (2) $\tau > \chi^2(uw)$ and $\tau > \hat{\chi}^2(uw)$; (3) $\tau > \chi^2(uw)$ and $\tau \le \hat{\chi}^2(uw)$. Under condition (1), the sub-sequence $uw$ is selected as a feature. Under condition (2), $uw$ is pruned; that is, all $uwv$ are also pruned from the search space. Under condition (3), $uw$ is not significant, but some $uwv$ can be significant; thus $uw$ is not selected as a feature, but mining continues to $uwv$. Figure 1 shows an example of searching and pruning the sub-sequences to select significant features by the PrefixSpan with SMP algorithm.
[Figure 1: Example of searching and pruning the sub-sequences by the PrefixSpan with SMP algorithm. The diagram shows a small set of labeled training sequences, the projected database (sample id: pointer pairs) built for each prefix, and the $\chi^2(u)$ and $\hat{\chi}^2(u)$ values computed at each pattern-growth step for $n = 1, 2, 3$ with threshold $\tau = 1.00$. Each candidate is (1) selected as a feature if $\chi^2(u) \ge \tau$, (2) pruned if $\hat{\chi}^2(u) < \tau$, or (3) passed over but mined further otherwise.]
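The search in Figure 1 can be written compactly. The sketch below is a hypothetical simplification rather than the authors' implementation: it grows prefixes one symbol at a time, keeps document-level counts, reuses the `chi2` helper from the earlier block, and applies the three conditions as described, with `chi2_upper_bound` implementing the SMP bound.

```python
def chi2_upper_bound(o_u, o_uc, o_c, n_total):
    """Morishita-Sese bound: no super-sequence of u can exceed this value."""
    return max(chi2(o_uc, o_uc, o_c, n_total),      # only positives survive
               chi2(o_u - o_uc, 0, o_c, n_total))   # only negatives survive

def mine(data, labels, tau, n_max):
    """PrefixSpan-style pattern growth with statistical metric pruning.
    data: list of symbol sequences; labels: +1/-1 per sequence.
    Returns the sub-sequences u with chi2(u) >= tau and |u| <= n_max."""
    n_total, o_c = len(data), sum(1 for y in labels if y > 0)
    selected = []

    def grow(prefix, projected):
        # projected: list of (sample_index, start_position) pointers.
        occurrences = {}  # symbol w -> samples containing prefix + w
        for s, pos in projected:
            for w in set(data[s][pos:]):
                occurrences.setdefault(w, set()).add(s)
        for w, samples in occurrences.items():
            u = prefix + (w,)
            o_u = len(samples)
            o_uc = sum(1 for s in samples if labels[s] > 0)
            if chi2(o_u, o_uc, o_c, n_total) >= tau:          # condition (1)
                selected.append(u)
            if chi2_upper_bound(o_u, o_uc, o_c, n_total) < tau:
                continue                                       # condition (2)
            if len(u) < n_max:                                 # condition (3)
                new_proj = [(s, data[s].index(w, pos) + 1)
                            for s, pos in projected if w in data[s][pos:]]
                grow(u, new_proj)

    grow((), [(s, 0) for s in range(len(data))])
    return selected
```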
[Figure 2: Example of the string encoding for trees under the postorder traversal. A tree $T^1$ with labeled nodes is encoded as the string '(((d (b) d (d a) a) b c) a)', and each sub-tree corresponds to a sub-string in which the brackets mark the hierarchical relations between nodes.]
The well-known tree mining algorithm [12] cannot simply be applied as a feature selection method for the proposed tree kernels, because this tree mining executes a preorder search of trees while tree kernels calculate the kernel in postorder. Thus, we take advantage of a string (sequence) encoding method for trees and treat them with sequence kernels. Figure 2 shows an example of the string encoding for trees under the postorder traversal. The brackets indicate the hierarchical relation between the nodes on their left- and right-hand sides. We treat these brackets as special symbols during the sequential pattern mining phase. Sub-trees are evaluated as the same if and only if the string-encoded sub-sequences are exactly the same, including brackets. For example, 'd ) b ) a' and 'd b ) a' are different. We previously said that a sequence can be treated as a special form of tree. We also encode sequences in this way; for example, the sequence 'a b c d' is encoded as '((((a) b) c) d)'. That is, we can define sequence and tree kernels with our feature selection method in the same form.
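The encoding itself is a one-pass postorder traversal. The sketch below is one plausible rendering of the scheme (the exact bracket placement is an assumption, chosen to match the sequence example '((((a) b) c) d)'), with a tree represented as a (label, children) pair:

```python
def encode(node):
    """Postorder string encoding of a labeled ordered tree: children are
    emitted first, wrapped in a bracket that closes before the parent's
    label, so brackets mark the hierarchical relations."""
    label, children = node
    if not children:
        return label
    return "(" + " ".join(encode(c) for c in children) + ") " + label

def encode_tree(root):
    return "(" + encode(root) + ")"

# The sequence 'a b c d' as a chain rooted at its last symbol:
chain = ("d", [("c", [("b", [("a", [])])])])
print(encode_tree(chain))  # ((((a) b) c) d)
```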
Sequence and Tree Kernels with Statistical Feature Mining: Sequence and tree kernels with our proposed feature selection method are defined by the following equations:
$$K^{SK,TK+FS}(T^1, T^2; D) = \sum_{1 \le i \le |T^1|} \sum_{1 \le j \le |T^2|} H_n(T_i^1, T_j^2; D). \qquad (3)$$
$D$ represents the training data, and $i$ and $j$ represent indices of nodes in the postorder of $T^1$ and $T^2$, respectively. Let $H_n(T_i^1, T_j^2; D)$ be a function that returns the sum over all statistically significant common sub-sequences $u$ with $t_i^1 = t_j^2$ and $|u| \le n$:
$$H_n(T_i^1, T_j^2; D) = \sum_{u \in \Gamma_n(T_i^1, T_j^2; D)} J_u(T_i^1, T_j^2; D), \qquad (4)$$
where $\Gamma_n(T_i^1, T_j^2; D)$ represents a set of sub-sequences with $|u| \le n$ that satisfy condition (1) above. Then, let $J_u(T_i^1, T_j^2; D)$, $J'_u(T_i^1, T_j^2; D)$ and $J''_u(T_i^1, T_j^2; D)$ be functions that calculate the value of the common sub-sequences between $T_i^1$ and $T_j^2$ recursively:
$$J_{uw}(T_i^1, T_j^2; D) = \begin{cases} J'_u(T_i^1, T_j^2; D) \cdot I_w(t_i^1, t_j^2) & \text{if } uw \in \hat{\Gamma}_n(T_i^1, T_j^2; D), \\ 0 & \text{otherwise,} \end{cases} \qquad (5)$$
where $I_w(t_i^1, t_j^2)$ is a function that returns 1 iff $t_i^1 = w$ and $t_j^2 = w$, and 0 otherwise. $\hat{\Gamma}_n(T_i^1, T_j^2; D)$ is a set of sub-sequences with $|u| \le n$ that satisfy condition (3). We introduce a special symbol $\Lambda$ to represent the 'empty sequence', and define $\Lambda w = w$ and $|\Lambda w| = 1$.
$$J'_u(T_i^1, T_j^2; D) = \begin{cases} 1 & \text{if } u = \Lambda, \\ 0 & \text{if } j = 0 \text{ and } u \ne \Lambda, \\ \lambda J'_u(T_i^1, T_{j-1}^2; D) + J''_u(T_i^1, T_{j-1}^2; D) & \text{otherwise,} \end{cases} \qquad (6)$$
$$J''_u(T_i^1, T_j^2; D) = \begin{cases} 0 & \text{if } i = 0, \\ \lambda J''_u(T_{i-1}^1, T_j^2; D) + J_u(T_{i-1}^1, T_j^2; D) & \text{otherwise.} \end{cases} \qquad (7)$$
The following equations are introduced to select the set of significant sub-sequences:
$$\Gamma_n(T_i^1, T_j^2; D) = \{u \mid u \in \hat{\Gamma}_n(T_i^1, T_j^2; D),\ \tau \le \chi^2(u),\ u_{|u|} \in \cap_{i=1}^{|u|-1} \mathrm{ans}(u_i)\}. \qquad (8)$$
The condition $u_{|u|} \in \cap_{i=1}^{|u|-1} \mathrm{ans}(u_i)$ evaluates whether a sub-sequence $u$ is a complete sub-tree, where $\mathrm{ans}(u_i)$ returns the ancestors of the node $u_i$. For example, 'd ) b a' is not a complete sub-tree, because the last node 'a' is not an ancestor of 'd' and 'b'.
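The complete-sub-tree condition of Equation (8) is easy to check given parent pointers. A hypothetical sketch (the node and parent representations are assumptions, not the paper's data structures):

```python
def ancestors(node, parent):
    """Set of ancestors of a node, following a child -> parent mapping."""
    result = set()
    while node in parent:
        node = parent[node]
        result.add(node)
    return result

def is_complete_subtree(u, parent):
    """Eq. (8) condition: the last node of u must be an ancestor of
    every other node in u, i.e. the pattern forms a complete sub-tree."""
    root = u[-1]
    return all(root in ancestors(node, parent) for node in u[:-1])
```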
$$\hat{\Gamma}_n(T_i^1, T_j^2; D) = \begin{cases} \Psi_n(\hat{\Gamma}'_n(T_i^1, T_j^2; D), t_i^1) \cup \{t_i^1\} & \text{if } t_i^1 = t_j^2, \\ \emptyset & \text{otherwise,} \end{cases} \qquad (9)$$
where $\Psi_n(F, w) = \{uw \mid u \in F,\ \tau \le \hat{\chi}^2(uw),\ |uw| \le n\}$, and $F$ represents a set of sub-sequences. Note that $\Gamma_n(T_i^1, T_j^2; D)$ and $\hat{\Gamma}_n(T_i^1, T_j^2; D)$ contain only sub-sequences $u$ that satisfy $\tau \le \chi^2(uw)$ and $\tau \le \hat{\chi}^2(uw)$, respectively, iff $t_i^1 = t_j^2$ and $|uw| \le n$; otherwise they become empty sets.
The following two equations are introduced for the recursive set operations needed to calculate $\Gamma_n(T_i^1, T_j^2; D)$ and $\hat{\Gamma}_n(T_i^1, T_j^2; D)$:
$$\hat{\Gamma}'_n(T_i^1, T_j^2; D) = \begin{cases} \emptyset & \text{if } j = 0, \\ \hat{\Gamma}'_n(T_i^1, T_{j-1}^2; D) \cup \hat{\Gamma}''_n(T_i^1, T_{j-1}^2; D) & \text{otherwise,} \end{cases} \qquad (10)$$
$$\hat{\Gamma}''_n(T_i^1, T_j^2; D) = \begin{cases} \emptyset & \text{if } i = 0, \\ \hat{\Gamma}''_n(T_{i-1}^1, T_j^2; D) \cup \hat{\Gamma}_n(T_{i-1}^1, T_j^2; D) & \text{otherwise.} \end{cases} \qquad (11)$$
In the implementation, $\chi^2(uw)$ and $\hat{\chi}^2(uw)$, where $uw$ represents the concatenation of a sequence $u$ and a symbol $w$, can be calculated from the set of pointers of $u$ into the data and the number of appearances of $w$ behind those pointers. We note that the set of pointers for $uw$ can be obtained simply from the previous search for $u$. Under condition (1), $uw$ is stored in $\Gamma_n$ and $\hat{\Gamma}_n$. Under condition (3), $uw$ is stored only in $\hat{\Gamma}_n$.
There are several techniques for calculating the kernel faster in the implementation. For example, since $\chi^2(u)$ and $\hat{\chi}^2(u)$ are constant for the same data, we only have to calculate them once. We store the internal search results of the PrefixSpan with SMP algorithm in a TRIE structure. After that, we look up the results in the TRIE instead of explicitly calculating $\chi^2(u)$ again when the kernel finds the same sub-sequence. Moreover, when the projected databases are exactly the same, the corresponding sub-sequences can be merged, since the values of $\chi^2(uv)$ and $\hat{\chi}^2(uv)$ for any postfix $v$ are exactly the same. We also introduce a 'transposed index' for fast evaluation of $\chi^2(u)$ and $\hat{\chi}^2(u)$. By using it, we only have to look up the index of $w$ to evaluate whether or not any $uw$ is a significant feature.
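These speed-ups are simple to mimic. In the hypothetical sketch below, a dictionary keyed by the pattern stands in for the TRIE, and the transposed index maps each symbol to its occurrence pointers; `chi2` and `chi2_upper_bound` are the helpers from the earlier sketches:

```python
from collections import defaultdict

def build_transposed_index(data):
    """Transposed index: symbol w -> list of (sample, position) pointers."""
    index = defaultdict(list)
    for s, seq in enumerate(data):
        for pos, w in enumerate(seq):
            index[w].append((s, pos))
    return index

class Chi2Cache:
    """Compute chi2 / its upper bound once per pattern, then look it up."""
    def __init__(self, labels, o_c, n_total):
        self.labels, self.o_c, self.n_total = labels, o_c, n_total
        self.table = {}  # pattern (tuple of symbols) -> (chi2, bound)

    def lookup(self, pattern, samples):
        if pattern not in self.table:
            o_u = len(samples)
            o_uc = sum(1 for s in samples if self.labels[s] > 0)
            self.table[pattern] = (
                chi2(o_u, o_uc, self.o_c, self.n_total),
                chi2_upper_bound(o_u, o_uc, self.o_c, self.n_total),
            )
        return self.table[pattern]
```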
Equations (4) to (7) can be computed in the same way as the original DP-based kernel calculation. The recursive set operations of Equations (9) to (11) can be executed in the same manner as Equations (5) to (7). Moreover, calculating $\chi^2(u)$ and $\hat{\chi}^2(u)$ with sub-structure mining algorithms allows the whole computation to run in the same order as the DP-based kernel calculation. As a result, statistical feature selection can be embedded in the original DP-based kernel calculation.

Table 2: Experimental Results

Question Classification
    n        1      2      3      4      ∞
    SK+FS    -    .823   .827   .824   .822
    SK       -    .808   .818   .808   .797
    TK+FS    -    .812   .815   .812   .812
    TK       -    .802   .802   .797   .783
    BOW-K  .754   .792   .790   .778     -

Subjectivity Detection
    n        1      2      3      4      ∞
    SK+FS    -    .822   .839   .841   .842
    SK       -    .823   .824   .809   .772
    TK+FS    -    .834   .857   .854   .856
    TK       -    .842   .850   .830   .755
    BOW-K  .717   .729   .715   .649     -

Polarity Identification
    n        1      2      3      4      ∞
    SK+FS    -    .824   .838   .839   .839
    SK       -    .835   .835   .833   .789
    TK+FS    -    .830   .832   .835   .833
    TK       -    .828   .827   .820   .745
    BOW-K  .740   .810   .822   .795     -
statistical feature selection can be embedded in original kernel calculation based on the DP.
Essentially, the worst case time complexity of the proposed method will become exponential, since we enumerate individual sub-structures in sub-structure mining phase. However,
actual calculation time in the most cases of our experiments is even faster than original
kernel calculation, since search space pruning efficiently remove vain calculation and the
implementation techniques briefly explained above provide practical calculation speed.
We note that if we set ? = 0, which means all features are dealt with kernel calculation, we
can get exactly the same kernel value as the original tree kernel.
5 Experiments and Results
We evaluated the performance of the proposed method on real NLP tasks, namely English question classification (EQC), subjectivity detection (SD) and polarity identification (PI). Each of these tasks is defined as text categorization: map a given sentence into one of the pre-defined classes. For EQC we used the data provided by [6], which contains about 5,500 questions with 50 question types. The SD data was created from Mainichi news articles; its size was 2,095 sentences, 822 of which were subjective. The PI data has 5,564 sentences, 2,671 of which express positive opinions. Using these data, we compared the proposed methods (SK+FS and TK+FS) with the conventional methods (SK and TK), as discussed in Section 3, and with the bag-of-words (BOW) kernel (BOW-K) [4] as a baseline. We used word sequences as the input objects for the sequence kernels and word dependency trees for the tree kernels.
The Support Vector Machine (SVM) was selected as the kernel-based classifier for training and classification, with a soft-margin parameter C = 1000. We used one-vs-rest SVM classifiers as the multi-class classification method for EQC. We evaluated the performance by label accuracy using ten-fold cross validation: eight folds for training, one for development and the remaining one as the test set. The parameters $\lambda$ and $\tau$ were automatically selected from the value sets $\lambda \in \{0.1, 0.3, 0.5, 0.7, 0.9\}$ and $\tau \in \{3.84, 6.63\}$ on the development set. Note that these two $\tau$ values represent the 5% and 1% significance levels of the $\chi^2$ distribution with one degree of freedom, as used in the $\chi^2$ significance test.
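Given Gram matrices produced by any of the kernels above, the classifier setup is standard. The following sketch uses scikit-learn's precomputed-kernel interface as a stand-in (the paper predates scikit-learn, so this is an assumption about tooling, not the authors' setup):

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def train_and_eval(K_train, y_train, K_test, y_test):
    """SVM over a precomputed Gram matrix, soft margin C = 1000,
    one-vs-rest for the multi-class question classification task.
    K_train: (n_train, n_train) kernel values among training sentences;
    K_test:  (n_test, n_train) kernel values of test vs. training."""
    clf = OneVsRestClassifier(SVC(C=1000, kernel="precomputed"))
    clf.fit(K_train, y_train)
    return np.mean(clf.predict(K_test) == y_test)  # label accuracy
```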
Table 2 shows our experimental results, where n indicates the restriction on the sub-structure size and n = ∞ means that all possible sub-structures are used. As shown in the table, SK and TK achieve maximum performance when n = 2 or 3. The performance deteriorates considerably once n reaches 4 or more. This implies that larger sub-structures degrade classification performance, the same tendency as in the previous studies discussed in Section 3, and is evidence of over-fitting in learning. On the other hand, SK+FS and TK+FS provided consistently better performance than the conventional methods. Moreover, the experiments confirmed one important fact: in some cases, maximum performance was achieved with n = ∞. This indicates that certain sub-sequences created from very large structures can be extremely effective. If the performance improves with a larger n, significant large features must exist; thus, we can improve the performance on some classification problems by dealing with larger sub-structures. Even when optimum performance was not achieved with n = ∞, the difference from the performance at a smaller n is quite small compared with that of SK and TK. This indicates that our method is very robust against the sub-structure size.
6 Conclusions
This paper proposed a statistical feature selection method for sequence kernels and tree kernels. Our approach selects significant features automatically based on a statistical significance test. The proposed method can be embedded in the original DP-based kernel calculation process by using sub-structure mining algorithms. Our experiments demonstrated that our method is superior to the conventional methods. Moreover, the results indicate that complex features exist and can be effective. Our method can employ them without over-fitting problems, which yields benefits in terms of both concept and performance.
References
[1] N. Cancedda, E. Gaussier, C. Goutte, and J.-M. Renders. Word-Sequence Kernels. Journal of Machine Learning Research, 3:1059-1082, 2003.
[2] M. Collins and N. Duffy. Convolution Kernels for Natural Language. In Proc. of Neural Information Processing Systems (NIPS 2001), 2001.
[3] D. Haussler. Convolution Kernels on Discrete Structures. Technical Report UCSC-CRL-99-10, UC Santa Cruz, 1999.
[4] T. Joachims. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proc. of the European Conference on Machine Learning (ECML '98), pages 137-142, 1998.
[5] H. Kashima and T. Koyanagi. Kernels for Semi-Structured Data. In Proc. of the 19th International Conference on Machine Learning (ICML 2002), pages 291-298, 2002.
[6] X. Li and D. Roth. Learning Question Classifiers. In Proc. of the 19th International Conference on Computational Linguistics (COLING 2002), pages 556-562, 2002.
[7] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text Classification Using String Kernels. Journal of Machine Learning Research, 2:419-444, 2002.
[8] S. Morishita and J. Sese. Traversing Itemset Lattices with Statistical Metric Pruning. In Proc. of the ACM SIGACT-SIGMOD-SIGART Symp. on Database Systems (PODS '00), pages 226-236, 2000.
[9] J. Pei, J. Han, B. Mortazavi-Asl, and H. Pinto. PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth. In Proc. of the 17th International Conference on Data Engineering (ICDE 2001), pages 215-224, 2001.
[10] J. Suzuki, Y. Sasaki, and E. Maeda. Kernels for Structured Natural Language Data. In Proc. of the 17th Annual Conference on Neural Information Processing Systems (NIPS 2003), 2003.
[11] C. Watkins. Dynamic Alignment Kernels. Technical Report CSD-TR-98-11, Royal Holloway, University of London, Computer Science Department, 1999.
[12] M. J. Zaki. Efficiently Mining Frequent Trees in a Forest. In Proc. of the 8th International Conference on Knowledge Discovery and Data Mining (KDD '02), pages 71-80, 2002.
| 2816 |@word cu:6 briefly:3 eliminating:1 advantageous:1 lodhi:1 bn:1 decomposition:1 tr:1 recursively:1 contains:2 hereafter:1 prefix:7 subjective:1 written:1 parsing:1 cruz:1 kdd:1 remove:1 v:1 selected:4 t2j:5 pointer:5 provides:1 node:8 constructed:1 become:2 fitting:3 symp:1 introduce:3 presumed:1 themselves:1 seika:1 multi:1 chi:6 automatically:2 actual:2 becomes:3 provided:2 moreover:9 notation:1 kind:1 substantially:1 string:6 finding:1 ti:5 growth:2 exactly:4 classifier:3 t1:6 positive:4 before:1 engineering:1 treat:3 sd:2 acad:1 encoding:4 id:1 itemset:1 co:1 limited:2 statistically:4 trie:2 practical:1 recursive:4 ker:1 projection:1 word:8 pre:1 get:1 cannot:2 selection:12 context:1 applying:1 restriction:2 conventional:8 map:1 demonstrated:1 roth:1 regardless:1 pod:1 rule:2 haussler:1 embedding:1 handle:1 searching:5 target:2 t1i:5 labeled:1 database:12 worst:1 calculate:13 news:1 ui:4 complexity:1 cristianini:1 gapped:1 traversal:3 dynamic:1 trained:2 solving:2 easily:1 fast:1 effective:2 london:1 exhaustive:1 saunders:1 whose:5 quite:2 larger:9 solve:1 encoded:2 say:1 otherwise:9 itself:1 vain:1 a9:2 sequence:65 advantage:3 product:1 frequent:2 relevant:1 bow:3 iff:2 achieve:1 postorder:5 empty:2 optimum:1 categorization:2 object:9 tk:11 nearest:2 implemented:1 skip:2 involves:1 indicate:2 nod:1 implies:1 merged:1 ucs:1 enable:1 opinion:1 elimination:1 explains:1 mortazavi:1 summation:1 enumerated:1 clarify:1 hold:2 mapping:1 lm:1 proc:8 bag:1 label:1 iw:2 always:1 aim:2 super:3 avoid:2 encode:1 joachim:1 consistently:1 indicates:4 contrast:1 baseline:1 eliminate:3 relation:1 ancestor:2 issue:5 classification:9 proposes:4 development:2 smoothing:1 special:3 uc:1 apriori:1 field:1 once:2 never:2 eliminated:1 represents:9 look:2 t2:5 report:2 simplify:1 employ:2 individual:2 phase:2 ourselves:1 consisting:1 freedom:1 detection:2 a5:2 mining:19 highly:1 possibility:1 evaluation:2 alignment:1 analyzed:1 bracket:3 tj:3 edge:1 partial:1 traversing:1 tree:52 taylor:1 instance:2 column:1 soft:1 cover:2 a6:3 lattice:1 cost:1 deviation:2 stored:2 dependency:1 considerably:1 cho:1 referring:1 ju:4 international:4 squared:2 again:1 nm:1 hn:3 return:6 li:1 japan:1 b2:8 satisfy:3 explicitly:1 performed:1 root:1 start:1 substructure:2 oi:1 accuracy:2 efficiently:12 yield:1 conceptually:2 dealt:2 famous:1 identification:2 basically:1 confirmed:1 executes:1 explain:1 definition:1 against:4 evaluates:1 subjectivity:2 transposed:1 dataset:1 recall:1 enumerates:1 knowledge:1 ou:10 zaki:1 supervised:1 improved:1 execute:1 evaluated:3 hand:2 defines:1 name:1 concept:4 asl:1 contain:1 normalized:1 laboratory:1 deal:4 attractive:1 during:1 rooted:1 oc:3 d4:3 complete:3 meaning:1 nips2003:1 smp:5 common:2 superior:1 behaves:1 b4:1 jp:1 discussed:2 significant:21 uv:9 language:7 shawe:1 han:1 surface:1 dynamicprogramming:1 showed:3 store:1 corp:1 certain:3 rep:1 continue:2 isozaki:2 semi:1 full:1 kyoto:1 d0:1 ntt:3 technical:2 match:1 exceeds:1 calculation:18 offer:1 cross:1 faster:2 e1:1 basic:1 essentially:1 metric:5 expectation:2 kernel:87 represent:6 achieved:4 rest:1 sigact:1 duplication:1 seem:1 structural:2 variety:1 b7:4 restrict:1 inner:1 idea:1 reduce:1 twoclass:1 ti1:18 enumerating:1 whether:1 f:9 render:1 soraku:1 enumerate:2 useful:1 generally:5 clear:1 santa:1 ten:1 exist:2 restricts:1 deteriorates:1 discrete:8 express:1 negates:1 threshold:3 d3:3 uw:26 graph:1 icde:1 sum:1 saying:1 missed:1 bound:4 fold:1 annual:1 speed:1 extremely:3 c8:2 pruned:2 cslab:1 structured:2 department:1 
according:1 combination:1 smaller:1 making:1 explained:1 ln:5 equation:7 computationally:1 previously:1 goutte:1 discus:2 ln1:2 know:1 resents:1 operation:2 apply:1 eight:1 hierarchical:1 kashima:1 original:9 remaining:1 nlp:6 linguistics:1 calculating:1 sigmod:1 parsed:1 hikaridai:1 already:1 question:5 said:1 dp:6 concatenation:2 gun:1 degrade:1 reason:2 length:1 index:3 polarity:2 gaussier:1 unfortunately:1 executed:2 sigart:1 negative:3 implementation:4 tj2:20 pei:1 allowing:1 upper:4 convolution:17 observation:1 finite:1 behave:1 ecml:1 situation:1 communication:1 precise:1 postfix:2 introduced:4 namely:1 preorder:1 sentence:4 hideki:1 nip:1 able:1 usually:2 pattern:5 maeda:1 appeared:3 including:3 oj:1 max:1 royal:1 critical:1 natural:8 treated:2 oij:2 improve:1 created:2 jun:2 naive:1 text:4 discovery:1 embedded:3 nels:1 validation:1 contingency:2 degree:1 sese:1 article:1 kecl:1 pi:2 row:1 last:2 english:1 infeasible:1 side:1 allow:2 neighbor:2 explaining:1 sparse:1 benefit:2 dimension:2 calculated:4 depth:1 evaluating:1 suzuki:2 projected:10 pruning:6 confirm:1 dealing:2 search:6 sk:11 table:8 robust:1 correlated:1 improving:1 forest:1 e5:1 excellent:1 complex:1 uwv:3 european:1 significance:2 main:2 csd:1 child:1 sub:72 explicit:2 exponential:2 candidate:1 watkins:2 coling:1 formula:2 embed:1 symbol:16 decay:1 svm:2 evidence:1 sequential:3 effectively:3 adding:1 subtree:2 occurring:1 duffy:1 margin:1 gap:1 koyanagi:1 simply:6 appearance:3 eij:3 ordered:1 pinto:1 acm:1 sized:2 crl:1 specifically:1 total:1 sasaki:1 experimental:3 tendency:1 formally:2 select:7 holloway:1 internal:1 support:2 collins:1 evaluate:3 d1:3 ex:1 |